Nvidia GPU PhysX almost here

Graphics are far more intensive than maths calculations. If that were not the case we wouldn't need a GPU, would we, and we'd run Crysis fine on a CPU!

Actually, the maths for graphics and physics have a lot in common. That's why GPUs are better suited to physics than CPUs.
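
To put it concretely, both workloads boil down to running the same small piece of maths over thousands of independent elements. A minimal sketch of that idea as a CUDA kernel, using a toy Euler integration step (the kernel name, particle count and numbers are purely illustrative):

[CODE]
#include <cuda_runtime.h>

// One thread per particle, the same instruction stream over big arrays:
// exactly the shape of work a GPU's stream processors are built for,
// whether the result is a pixel or a position.
__global__ void integrate(float3* pos, float3* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    vel[i].y -= 9.81f * dt;      // gravity
    pos[i].x += vel[i].x * dt;   // simple Euler step
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// Host side launch, e.g.: integrate<<<(n + 255) / 256, 256>>>(d_pos, d_vel, n, dt);
[/CODE]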
 
I doubt it will be of much benefit to G80 or G92 users; they're starting to struggle in some new games as it is, never mind doing physics at the same time. I'd like to be proved wrong though.
 
I think a wizard's hat would be nice :p.
If they paid a lot for the PhysX API then they need to get the money back somehow, from their rivals if they use it, never mind us.
 
“Yep, you will need a second card if you want to keep your framerates up. There are only so many stream processors and that is what is being used for the physics.”
Surely that depends on the game. Freeing up the CPU by moving physics off it will in some games give a bigger FPS boost than the drop from running physics on the GPU.

True, I mean, everyone thinks physics is utilised only in FPS games. Take Supreme Commander for example: every projectile's path is calculated in real time. OK, so it's not majorly advanced physics like some of the other stuff that's been demoed, but if an onboard PPU can take some of the load off the CPU then I'm happy, as this game is VERY CPU limited. I bet there isn't one single person out there that can run an 81x81 map with a 1000 unit limit at +5 speed constantly all the way through; hell, even normal speed is pushing it...
 
“I bet there isn't one single person out there that can run an 81x81 map with a 1000 unit limit at +5 speed constantly all the way through...”

Poke someone with a Skulltrail :cool:
 
“...if an onboard PPU can take some of the load off the CPU then I'm happy, as this game is VERY CPU limited.”

But if you're playing online, all those projectiles and paths have to be communicated to the other computer. And as 95% of the gaming population will not have a second GPU for physics, it means the developers have to pour money into an investment that only benefits a very few people; and of that 5%, not everyone likes that form of game...
 
“Poke someone with a Skulltrail :cool:”

Haha, a second gfx card, even if it was dual GX2s, would still be cheaper than Skulltrail ;)


“But if you're playing online, all those projectiles and paths have to be communicated to the other computer...”

Pretty good point there :| *cries*
 
“But if you're playing online, all those projectiles and paths have to be communicated to the other computer...”
That's not a problem. Why would it be?

“So as 95% of the gaming population will not have a second GPU for physics it means that the developers have to pour money into an investment that only benefits a very few people...”
Or if the game is more CPU intensive than GPU intensive you can do the physics on the GPU for a speed boost, and those in multiplayer without the right GPU have the physics fall back onto the CPU. The API is made to fall back to the CPU for those who don't have the hardware, so it's hardly a lot of money that's needed from the devs as it's all one package; a rough sketch of that kind of runtime check is below.

95%! Pretty sure it's more like 20% to 40% if one GPU can do it, and there is no reason why it couldn't.
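
To be clear about what that fallback might look like at startup: this is only a hypothetical sketch written against the CUDA runtime, not the actual PhysX API. The point is that one build can probe for a capable card and quietly keep the CPU path otherwise:

[CODE]
#include <cuda_runtime.h>
#include <cstdio>

// Probe for a capable GPU; if none is present the engine simply keeps
// running its existing CPU physics path, so every player sees the same game.
bool gpuPhysicsAvailable()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
        return false;                       // no suitable card at all

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    return prop.major >= 1;                 // any unified-shader part (G80 class or newer)
}

int main()
{
    printf("physics backend: %s\n", gpuPhysicsAvailable() ? "GPU" : "CPU fallback");
    return 0;
}
[/CODE]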

I see what you're getting at - however the game has to play on their minimum specs. If a GPU affected in-game objects by increasing their number, then this means (a) higher network bandwidth and (b) more CPU and GPU power required by each client playing the game.
Game engines currently still use the CPU to perform boundary checks, clipping etc. It's the memory bus that is usually the limiting factor (I should point out I've been involved with GPGPU for some time).

The CPU still has to do something with the data. Particle demonstrations are easy because you don't have to spend time restructuring the data sets, so it will be interesting to see what they get from real-world processing.

One other point - I bet you will not be able to mix and match the AMD, nVidia, Intel cards - thus nVidia get a lock-in because of the higher capital cost for a gamer to move to the latest better competitor card (because he has to add the cost of replacing the PPU-GPU card too).
 
“I see what you're getting at - however the game has to play on their minimum specs.”
True, but Nvidia have said AMD can use it for free, and as it can work on one card most gaming PCs could have support. If done correctly it's going to be useful for all gamers; most real gamers have ATI or Nvidia GPUs, and if you have a 3+ year old GPU chances are you're below the recommended specs.





“It's the memory bus that is usually the limiting factor (I should point out I've been involved with GPGPU for some time).”
The memory bus isn't a problem, as very little data will need to be sent across it for physics. The hard part isn't sending the data back and forth, it's processing the data. You need a lot of internal bandwidth to process the data but very little external bandwidth to send the results.
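
Some rough sums to back that up; the object count and sizes are made up but representative:

[CODE]
#include <cstdio>

// Back-of-envelope figures for the "external bandwidth" point: copying results
// back over the bus is tiny next to the internal bandwidth the GPU burns while
// actually solving the physics.
int main()
{
    const int    bodies       = 10000;     // illustrative object count
    const int    bytesPerBody = 7 * 4;     // position (3 floats) + orientation quaternion (4 floats)
    const double fps          = 60.0;

    double perFrame  = double(bodies) * bytesPerBody;        // bytes copied back each frame
    double perSecond = perFrame * fps / (1024.0 * 1024.0);   // MB/s over the bus

    printf("%.0f KB per frame, about %.0f MB/s\n", perFrame / 1024.0, perSecond);
    // Roughly 273 KB per frame, ~16 MB/s: a fraction of a percent of a PCIe x16 link.
    return 0;
}
[/CODE]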





“One other point - I bet you will not be able to mix and match the AMD, nVidia, Intel cards - thus nVidia”
No one has said anything about needing a second card; the only thing that's been said is that Nvidia has opened the API for anyone to use, including AMD, for free. It should work fine on one card.
 

“...if you have a 3+ year old GPU chances are you're below the recommended specs.”
“No one has said anything about needing a second card... It should work fine on one card.”


Without a doubt, since it only works on the 8 series and above, which makes the oldest card it will work on about 18 months old, I think.

Maybe not a second card, but as the CEO states this will encourage people to upgrade their card or buy a second one for the physics, so expect it to cripple currently available cards on the framerate front.

At the end of the day Nvidia are only giving it away free to ATI so that games designers use it in a lot of games, making us upgrade or buy a second card.

Nvidia are hoping your upgrade/second card is theirs; that depends on the new products they bring out, really.
 
“True, but Nvidia have said AMD can use it for free, and as it can work on one card most gaming PCs could have support.”

However, I don't believe the 'standard' would be open. The net effect becomes a standards compliance war. Also, it's feasible that they can withdraw the support at any time.
Additionally, attempting to comply with a foreign API would cost AMD in terms of delays delivering to market, thus nVidia gain.

“The memory bus isn't a problem... You need a lot of internal bandwidth to process the data but very little external bandwidth to send the results.”

True: the intensive bandwidth for the computation stays on the card. However, if one card is used for physics and one for graphics, it still requires the transfer of data between them, and those transfers are DMA based (i.e. there is a data transfer direct to system memory, just as with any other hardware device such as a hard disc controller).
There's no interlink for transferring data between the current standard of cards, and the memory controllers cannot execute a fragment program and transfer unrelated data simultaneously either.
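
A minimal sketch of what that staging looks like with two separate cards and no direct link, written against the CUDA runtime (the function and buffer names are invented for the example):

[CODE]
#include <cuda_runtime.h>

// With no peer-to-peer path, results computed on the "physics" card have to
// bounce through system memory (DMA down, then DMA back up) before the
// "graphics" card can see them.
void copyPhysicsResultsToRenderCard(const float* d_src,   // buffer on device 0 (physics card)
                                    float*       d_dst,   // buffer on device 1 (graphics card)
                                    float*       h_stage, // host staging buffer (ideally pinned)
                                    size_t       bytes)
{
    cudaSetDevice(0);
    cudaMemcpy(h_stage, d_src, bytes, cudaMemcpyDeviceToHost);   // card 0 -> system RAM

    cudaSetDevice(1);
    cudaMemcpy(d_dst, h_stage, bytes, cudaMemcpyHostToDevice);   // system RAM -> card 1
}
[/CODE]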

“No one has said anything about needing a second card... It should work fine on one card.”

It's true this is possible as long as you're not swapping data, so only cards with large memory would be able to do this (common sense). Running legacy last-gen support for this would make the market bigger.
The current memory requirements are driven by larger displays, and thus higher resolution textures, with more processing power needed too. Add the physics engine and we're already looking at either a ridiculous single card or multiple cards.

From doing development (investigating the automatic vectorisation of code from GCC on GPUs) I can say that data transforms are the killer for any application.

There's a big difference in the data formats required to deliver the best performance on a GPU between graphics and other processing; to get the best results it's an all-or-nothing approach in the games engine. What's good for SPMD (GPU) is not good for SIMD (CPU). Thus if games developers just add this as an add-on to the API then I don't think there's much benefit for anything useful.
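
A toy illustration of the layout clash and the per-frame "data transform" cost being described here; the struct layouts and counts are invented for the example:

[CODE]
// Array-of-structs: how gameplay code typically keeps an entity.
struct Particle { float x, y, z, mass; };
Particle cpuParticles[10000];

// Struct-of-arrays: the layout a GPU wants so thousands of threads can read
// neighbouring elements in one coalesced memory transaction.
struct ParticlesSoA { float x[10000], y[10000], z[10000], mass[10000]; };
ParticlesSoA gpuParticles;

// The shuffle between the two layouts is the "data transform": it touches
// every byte, every frame, before any physics has actually been computed.
void repack(const Particle* in, ParticlesSoA& out, int n)
{
    for (int i = 0; i < n; ++i) {
        out.x[i]    = in[i].x;
        out.y[i]    = in[i].y;
        out.z[i]    = in[i].z;
        out.mass[i] = in[i].mass;
    }
}
[/CODE]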

Now with the majority of the customer base not having the GPU horsepower, they have to either (a) buy an outright top-end GPU, (b) buy a new normal GPU and use the old one as a PPU, or (c) not bother with GPU-based physics. Now weigh the PPU against the possibility of running a better resolution... for the average Joe, which do you think they'll take?
 
They are bringing it out at this point in time so that they can beat ATI at 3DMark Vantage when it comes out, which apparently is very soon.

:rolleyes:

That's just a quick remark - of course there's Intel, and it will probably be a big thing in future, but I thought I would put in a cynical remark.

Horses for courses - both manufacturers do these things - I'm on both sides of the fence. :p
 
That monster single card? Well, it may not be too much for them to link two discrete GPUs on a card via a form of HyperTransport; even a slow link for transferring data between the two GPUs' memory would be useful, albeit slower. I don't believe nVidia's current generation of GPU has this ability.
The only thing is, with the economic downturn, just how big is the market for a £500+ graphics-plus-PPU card that is difficult to use in a scientific application?

Don't discount Intel; they don't like anyone disputing their "crown", and an upstart such as nVidia attempting to gain parallel processing market share will get a similar response to the one AMD got.
 