All current accelerated PhysX content works with the GPU drivers

“Last but certainly not least, Manju Hegde, former CEO of Ageia, offered an update on his team's progress in porting the physics "solvers" for the PhysX API to the GPU in the wake of Nvidia's buyout of Ageia. He said they started porting the solvers to CUDA roughly two and a half months ago and had them up and running within a month. Compared to the performance of a Core 2 Quad CPU, Hegde said the GeForce GTX 280 was up to 15X faster simulating fluids, 12X faster with soft bodies, and 13X faster with cloth and fabrics. (I believe that puts the GTX 280's performance at roughly six to 10 times that of Ageia's own PhysX hardware, for what it's worth.) Their goal is to make sure all current hardware-accelerated PhysX content works with the GPU drivers.

Hegde also pointed out that game developers have become much more open to using hardware physics acceleration in their games since the acquisition, with 12 top-flight titles signing on in the first month, versus two titles in Ageia's two-and-a-half years in existence. Among the games currently in development that will use PhysX are NaturalMotion's Backbreaker football sim and the sweet-looking Mirror's Edge.”


Source: http://techreport.com/articles.x/14934/4

Apart from the strange two-title comment, which isn't true, this is great news.
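
As an aside on why the solvers ported to CUDA in under a month: fluid, soft-body and cloth solvers are mostly per-particle updates, which map one-to-one onto GPU threads. A rough sketch of the kind of kernel involved (purely illustrative, nothing to do with Ageia's actual code):

// One thread integrates one particle -- this is why fluids and cloth
// parallelise so well on a GPU. Semi-implicit Euler with gravity.
__global__ void integrateParticles(float3 *pos, float3 *vel,
                                   const float3 *force,
                                   float invMass, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    vel[i].x += force[i].x * invMass * dt;
    vel[i].y += (force[i].y * invMass - 9.81f) * dt;  // gravity on y
    vel[i].z += force[i].z * invMass * dt;

    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// Host side, one thread per particle, e.g. 256 per block:
// integrateParticles<<<(n + 255) / 256, 256>>>(d_pos, d_vel, d_force,
//                                              1.0f / mass, dt, n);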

EDIT:
"So then, in theory any CUDA enabled GPU will run Physx. Some games we already saw running using PhysX:

Space Siege, Gas Powered Games
Nurien, a social network platform
Bionic Commando, CAPCOM, GRIN
NaturalMotion, Backbreaker game
APB, Realtime Worlds
STALKER: Clear Sky, real-time debris and cloth
Race Driver: GRID, Phill Scott, cloth physics in flags
Gearbox Software: Brothers in Arms: Hell's Highway, Aliens: Colonial Marines and Borderlands."


http://www.guru3d.com/article/geforce-gtx-280-review-test/9
 
Compared to the performance of a Core 2 Quad CPU, Hegde said the GeForce GTX 280 was up to 15X faster simulating fluids, 12X faster with soft bodies, and 13X faster with cloth and fabrics. (I believe that puts the GTX 280's performance at roughly six to 10 times that of Ageia's own PhysX hardware, for what it's worth.)

I don't know if you recall me saying this before, over and over? I also said that until physics acceleration was handled by the GPU we would not see wide support in software.

Anyway, now that physics acceleration is being handled by GPUs, and has been taken up by companies with enough muscle to affect the market (AMD and Nvidia), I'm sure we'll see some movement in software terms. Hardware physics acceleration should become commonplace over the next 12-18 months.

What I'm expecting now is a couple of years of competing standards (Havok vs PhysX). After a good ol' fashioned battle (HD DVD / Blu-ray style) we should see a standardised physics engine emerge (probably from Microsoft, bundled into their DirectX API). It's possible that one of the two formats will 'win out', but much more likely is that a third, more flexible, more general (and perhaps less efficient) standard will emerge.

The crystal ball speaks once again :p
 
Yeah, it's actually useful now. GeForce 8+ cards are a lot more powerful than the PhysX card was, and also more common, and you can keep your old one while upgrading, etc.
 
I'd be surprised if MS doesn't put it into DirectX - it's a pretty obvious match.

But in practical terms it isn't easy.

Physics programming requires much more flexibility than graphics-based programming.

Currently (or soon...), we will use DirectX for the graphical side, and CUDA / CTM for the physics processing. Whatever DirectX physics implementation is made will have to unify the different GPGPU APIs from AMD and Nvidia, and this is no simple task - it's like trying to create a coding system which is common to both C and Fortran (for example). Not that it's impossible, but it's certainly not something that can be knocked together in 12 months by a small team of, say, 50 or so. Currently there is nothing pressing MS to spend the vast funds that would be required to 'do the job properly'.
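
To make the unification problem concrete, any 'DirectX physics' layer would need a single API on top with vendor-specific backends underneath. A hypothetical sketch (every name here is made up - no such Microsoft API exists):

// Illustrative only -- showing the shape of the abstraction problem,
// not a real or proposed Microsoft API.
struct RigidBody { float pos[3], vel[3], mass; };
struct Pose      { float pos[3], orient[4]; };

struct PhysicsBackend {
    virtual ~PhysicsBackend() {}
    virtual void uploadBodies(const RigidBody *bodies, int n) = 0;
    virtual void step(float dt) = 0;               // one solver tick
    virtual void readbackPoses(Pose *out, int n) = 0;
};

// Each vendor would implement this against its own GPGPU API:
struct CudaBackend : PhysicsBackend { /* Nvidia CUDA kernels here */ };
struct CtmBackend  : PhysicsBackend { /* AMD CTM stream code here */ };

Defining the interface is the easy bit; making the same semantics run efficiently on two very different architectures is where the C-vs-Fortran comparison bites.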

But yes, I think it's inevitable. Just not for a few years.
 
Yeah, I know it's a lot of work and won't happen soon, if we assume it's not already being worked on - there were some whispers of this on some of the dev forums a while back.
The crux is, though, that having two separate APIs talking to the hardware is not desirable, and it's something MS is very likely to prioritise getting into DirectX, where that type of abstraction belongs in the Windows world - else they end up with a support nightmare. I'd guess they will look more to Havok than PhysX, as they are friendlier with Intel, and AMD are reportedly looking at Havok as well. With the added bonus of sticking it to nVidia for spoiling their DX10/10.1 plans.
 
I'm assuming it is OK but haven't seen evidence - is it possible to keep my old 8800 GTS 640 and run it for physics, and then stick a 4870 or some other ATI card in my system for graphics?
 
I'm assuming it is OK but haven't seen evidence - is it possible to keep my old 8800 GTS 640 and run it for physics, and then stick a 4870 or some other ATI card in my system for graphics?

Depends how they do the drivers... it _should_ be possible, as CUDA and the PhysX routines don't need to be part of the current rendering pipeline... but knowing nVidia they will probably tie it all into the current context :(
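
For what it's worth, the CUDA runtime already lets host code choose which GPU does the compute work, independently of which card drives the display. A minimal sketch using the real CUDA runtime API (whether the PhysX drivers will actually expose this choice, and whether nVidia's drivers will sit happily next to ATI's, is another matter):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    // List every CUDA-capable GPU in the box.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s\n", i, prop.name);
    }

    // e.g. send physics work to a GPU that isn't driving the display,
    // such as an old 8800 GTS sitting in the second slot.
    if (count > 1)
        cudaSetDevice(1);

    return 0;
}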
 
So Nvidia/PhysX/CUDA and Intel/AMD/Havok/Larrabee? Larrabee is still some way away, but in the long run it is very similar to Nvidia's approach, with many simple in-order execution cores. And Larrabee will be programmed using x86-style instructions, so it will be very easy for existing PC programmers to learn.

I wonder if AMD's link with Havok will mean Havok has to have drivers for AMD's shaders and Intel's Larrabee.

There's certainly a possibility that MS will implement DirectX physics, but I think they will sit on the sidelines and let the hardware guys slog it out, and then pick the "best" hardware and write an MS physics engine that uses that hardware.
 
Depends how they do the drivers... it _should_ be possible, as CUDA and the PhysX routines don't need to be part of the current rendering pipeline... but knowing nVidia they will probably tie it all into the current context :(


You need an Nvidia GPU with a CUDA-supporting Nvidia driver set in order to use CUDA. This makes sense, since CUDA uses the hardware, and the drivers are simply an interface between the hardware and software.

So whichever way you look at it, for this to be possible you would need to have both ATI and Nvidia drivers installed. I don't see this being possible on the same OS install.
 
So Nvidia/PhysX/CUDA and Intel/AMD/Havok/Larrabee? Larrabee is still some way away, but in the long run it is very similar to Nvidia's approach, with many simple in-order execution cores. And Larrabee will be programmed using x86-style instructions, so it will be very easy for existing PC programmers to learn.

I wonder if AMD's link with Havok will mean Havok has to have drivers for AMD's shaders and Intel's Larrabee.

Larrabee is first and foremost a GPGPU solution - hence the focus on x86-style programming compatibility. This is where they believe they will win their market share.

It's true that we may eventually see a gaming GPU based on Larrabee, but it should follow some time after the initial GPGPU foray. Larrabee's strength is touted to be flexibility rather than raw computing power, so I'm not sure how good a gaming GPU it will make.
 
That is a good point

I've had ATI and nVidia drivers installed on the same PC back in the day - but from what I hear they don't co-exist very well any more...
 
That is a good point

I've had ATI and nVidia drivers installed on the same PC back in the day - but from what I hear they don't co-exist very well any more...

I suppose there is no real reason why they *shouldn't* be able to co-exist... That said I don't see either side going out of their way to make it happen :(

How long ago was 'back in the day' out of curiosity? Could you use both cards directly with the same OS?
 
Hiya... sorry to sound stupid... but does this mean that if you have the latest Nvidia card with CUDA, there is no need for an Ageia PhysX accelerator card?
 
Hiya... sorry to sound stupid... but does this mean that if you have the latest Nvidia card with CUDA, there is no need for an Ageia PhysX accelerator card?

That's the claim, although we will have to wait until we can get our hands on the driver to test it out.

Remember though, the CUDA physics acceleration is not entirely 'free'. Using shader pipes for PhysX stuff will take them away from graphics processing.

With games currently programmed for the PhysX card, you should only be looking at a small percentage of a high-end card being taken away for physics processing (say 10-20%). But if you have an 8600 etc. this could be significantly higher.
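
To put rough numbers on that (the percentages are just for illustration, not measured figures): if a game is GPU-bound and physics takes 15% of shader time, frame time rises by a factor of 1/0.85 ≈ 1.18, so 60 fps drops to roughly 51 fps. Lose 40% of a mid-range card's shaders to physics and the same scene falls to around 36 fps.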
 
How long ago was 'back in the day' out of curiosity? Could you use both cards directly with the same OS?

We are talking Windows 98 (both cards running in the same OS), around the release of the Voodoo 3 cards - I had a dual-monitor setup with a TNT2 Ultra (AGP) and some ATI Rage Fury (PCI) card - can't remember the exact model now - I swapped it out for a Voodoo 3 2000 PCI when they came out.

So quite a while ago :P
 