Nvidia says it's first to offer full OpenGL 3.0 support

I'll make the point again.

There are no games of significance that need PhysX. At the moment it's just a fancy, unneeded extra that will be passed over when selecting a GPU in favour of more tangible factors (cost, for one). Yes, it's a lovely extra, as are DX10.1, CUDA, OpenGL and Stream, but they share the same characteristic: they are not needed at the moment. By the time hardware physics becomes the norm (I'm of the view that MS will be the driver, not Nvidia), current GPUs will be out of fashion. Buying computer hardware on the strength of an infant feature aimed at the medium/far future is silly, especially a GPU.

There are many technologies and products that sound fantastic in theory and would be great if there was anything significant out there to support them.

So what are you saying? That we should wait 3-4 years for Microsoft to develop a physics API when we already have a perfectly good one?

Just take a look at GTA4 if you want to see why we need hardware physics now. It takes a quad-core CPU to run it decently, and people going from a C2Q to an i7 are seeing a 5-15fps improvement, proving that even quad-core CPUs are a bottleneck. Yet if it supported PhysX you could probably get better performance on a single/dual-core CPU.

Physics engines have been used in games for years, and whilst graphics cards have steadily progressed to cope with newer graphics engines, we have now reached a point where a general-purpose CPU (or four) just cannot cope with today's increasingly complex physics engines. They were never designed for complex physics, and there's far more to game physics than just the obvious effects a PPU brings, like cloth, debris and fluid.

Why waste money on a high-end 4870X2 when it's progressively being held back by a CPU getting bogged down by physics? Today's high-end GPUs are bottlenecked by CPUs at anything but super-high resolutions, and I'd rather see a hardware physics solution than totally unnecessary 8-16 core CPUs, which is where we'll be going without it (perhaps that's why AMD don't want it?).
 
Last edited:
I didn't question the need for hardware physics in the future; what I am questioning is the need for it now, given the lack of any games which use the API.
 
I didn't question the need for hardware physics in the future; what I am questioning is the need for it now, given the lack of any games which use the API.

That's a paradox... games can't start using it until it's there to be used... most developers are shy of dedicating their product to solutions that aren't widely accepted - which goes back to my original point - ATI would rather bang on about DX10.1 - which for now can be worked around and doesn't bring anything useful to the table - than spend time getting a standard hardware physics system going, which, as the poster above illustrated with GTA4, would bring something tangible to game development.
 
Complex Physics in GTA4? Has somebody been playing a different game to the one I played on the 360 and PS3? The problems the PC version has are down to increased texture usage and, more importantly, much more geometry from the draw distance and traffic/pedestrian density.
 
Last edited:
So GTA is limited because of physics processing, is it? Are the consoles that good at it, then?

It uses a software physics engine called Euphoria and the console versions run at about 25fps average, which is quite poor really.

I don't really see any other reason why GTA4 requires so much CPU power compared to other games.

layte said:
Complex Physics in GTA4? Has somebody been playing a different game to the one I played on the 360 and PS3? The problems the PC version has are down to increased texture usage and, more importantly, much more geometry from the draw distance and traffic/pedestrian density.

Complex by CPU standards yes.

You can turn the graphics right down and it barely runs any better at lower resolutions; it is still largely bottlenecked by the CPU.
 
So PhysX would be of no help here anyway, as firstly they use their own physics engine, and secondly Euphoria works in a different way to most other engines. Maybe once OpenCL hits the ground running, software developers will have a unified API to target for their various engines to provide acceleration when available, but that will be no fun, as fanboys will have a large part of their arsenal taken away.
 
So PhysX would be of no help here anyway, as firstly they use their own physics engine, and secondly Euphoria works in a different way to most other engines. Maybe once OpenCL hits the ground running, software developers will have a unified API to target for their various engines to provide acceleration when available, but that will be no fun, as fanboys will have a large part of their arsenal taken away.

Well, this is the thing - there is no standard physics system - so developers either cut features or end up working up their own solution, which in some cases runs less than optimally on the CPU alone... Euphoria does work a little differently to PhysX, but I don't imagine it would have been too hard to implement with the licensed source.
 
Perhaps R* wanted full control over everything; not everyone is happy with a pre-canned solution, or with bowing to the whims of a third party.
 
NVIDIA Demo: Cascades
Explore a fantastic world of endless rock formations and exhilarating detail. Watch majestic waterfalls cascade down exotic rock formations, while buzzing swarms of dragonfly-like inhabitants dive and play.

Sit back and watch as the water flows down over a cliff, sails through the air, and crashes back onto the rock below in a cloud of mist. Or, bring out your inner artist and interactively design your own waterfalls, creating a beautiful dreamscape for others to explore.

FEATURES:

* Microsoft DirectX 10: Every aspect of the Cascades demo demonstrates next-generation features enabled by DirectX 10, including the generation of terrain on the GPU, smart particle systems, and the high quality rendering of the scene.
* Procedural Geometry Creation: Rock structures are built by the GPU itself. Use the middle mouse button to pan endlessly up or down, while the GPU streams out new chunks of rock in an infinite variety of shapes and formations.
* Smart Particles: The waterfalls (and insect flocking) are driven by advanced particle systems with geometry shaders at their core. Realistic physics let water particles collide with the rock and flow over it naturally. Particles can even spawn other particles - for example, a water particle can spawn a mist particle when it collides with the rock at high speed.
* Hyper-Realistic Shading: Even after driving all of the new effects in this demo, the GPU still has plenty of horsepower left for unbelievable shading quality, and even Displacement Mapping. From the farthest vantage to the closest scrutiny of the surface, textures appear rich and palpable.
http://us.download.nvidia.com/downloads/nZone/videos/nzm_Cascades_tech.wmv
& listen to what is said.
http://www.nzone.com/object/nzone_cascades_home.html
Didn't need PhysX to do that & looks better.
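For anyone curious what the "smart particles" feature above actually amounts to, here's a minimal CPU-side sketch in Python of the same idea - water particles that spawn mist particles on a hard impact with the rock. The structure, names and numbers are made up for illustration; the real demo does this with geometry shaders on the GPU.

import random

# Water particles fall under gravity; a hard impact with the rock (here a
# flat surface at y = 0) spawns a mist particle, as described in the
# Cascades feature list. All values are made up for illustration.
GRAVITY = -9.8
DT = 1.0 / 60.0
SPAWN_SPEED = 5.0  # impact speed above which a mist particle is spawned

def step(particles):
    new_particles = []
    for p in particles:
        p["vy"] += GRAVITY * DT
        p["y"] += p["vy"] * DT
        if p["y"] <= 0.0:                                  # crude rock collision
            if abs(p["vy"]) > SPAWN_SPEED and p["kind"] == "water":
                new_particles.append({"kind": "mist", "y": 0.0,
                                      "vy": random.uniform(0.5, 2.0)})
            p["y"], p["vy"] = 0.0, -0.3 * p["vy"]          # damped bounce
    particles.extend(new_particles)

water = [{"kind": "water", "y": 10.0, "vy": 0.0} for _ in range(100)]
for _ in range(120):                                       # simulate two seconds
    step(water)
print(sum(1 for p in water if p["kind"] == "mist"), "mist particles spawned")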
 
Last edited:
You're missing the point: that still uses hardware-accelerated physics, albeit a very simplified and highly specific implementation - it's not so much that PhysX is essential, but rather that a hardware physics system is badly needed for future gaming titles, and PhysX is so far the only viable solution with any kind of forward momentum... it works, it's easy to use, free for most uses and cheap to license if you need to do something unique... very robust and great performance... what's there not to like... oh, it's Nvidia proprietary, never mind, let's look at the options... OpenCL? Oh, you still have to do the legwork yourself. Microsoft? Nada. ATI? Nada. Intel? Larrabee might show its head eventually, but performance will still be far inferior even to current GPUs, let alone the GPUs that come out around the time it's released.
 
If PhysX is so stupendously wonderful and free, as some here love pointing out at every single opportunity, why do plenty of developers either create their own or buy in a different third-party solution?

The only people using PhysX seem to be those Nvidia are bunging bundles of cash at; the console devs don't seem interested, so that means a huge portion of titles coming to the PC won't support it unless Nvidia pay for it (like Mirror's Edge). You can jump up and down beating your drum as much as you want, but until either a true open standard appears, or the facility for accelerated physics is available cross-platform without interference from a single third party, it will continue to be a niche feature, relegated to paid-for titles and inconsequential added extras. The best thing that could happen for the adoption of PhysX would be it going truly open, none of this "you can use it but we control where it goes" rubbish that is being peddled right now. But the chance of that happening is about as slim as an NV/ATI thread in this place being free from fanboy arguments.

I'm not arguing against the need for acceleration, just the manner in which it eventually gets here. Something peddled solely in the interest of NV or ATI is not what we need; no matter how good it would be for us gamers in the short term, the long-term prospects for competition and any semblance of price control would go straight out of the window (see the original GTX280 pricing fiasco).
 
Intel? Larrabee might show its head eventually, but performance will still be far inferior even to current GPUs, let alone the GPUs that come out around the time it's released.

"Will" is a very strong word, and Larrabee's architecture does seem to be pretty powerful from a technical standpoint.

Let's look at Larrabee's architecture:

Okay, for the most part it uses what could be deemed 'software rendering', on something that looks similar to an x86 core - except each 'core' has a 512-bit, 16-ALU-wide vector unit, and each ALU can either do a MADD (fused multiply-add) on Int32, Float32 or Float64 data, fetch data, or do a data-type conversion. There are supposed to be anywhere up to 32 of these 'cores' per die, giving the theoretical high-end Larrabee product 512 of these ALUs.

It's also speculated that, as Larrabee is based on the x86 architecture, its clock speed will be in the 1.2-2GHz range. So that gives it a fair amount of shader data-crunching power, to say the least, which is pretty much comparable with the shader architecture of a standard GPU, although a bit more complex.
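Putting rough numbers on that (a back-of-the-envelope Python sketch using only the speculated figures above - 32 cores, 16-wide vector units, a MADD counted as two float ops per clock, and a 1.2-2GHz clock - nothing official from Intel):

# Back-of-the-envelope peak throughput for a hypothetical top-end Larrabee,
# using only the speculated figures above (not official Intel numbers).
cores = 32                    # 'cores' per die on the theoretical high-end part
alus_per_core = 16            # 512-bit vector unit / 32-bit floats
ops_per_alu_per_clock = 2     # a fused multiply-add counts as two float ops

for clock_ghz in (1.2, 2.0):
    peak_gflops = cores * alus_per_core * ops_per_alu_per_clock * clock_ghz
    print(f"{clock_ghz} GHz -> roughly {peak_gflops:.0f} GFLOPS single-precision peak")

So on paper that's somewhere in the one to two TFLOPS region of single-precision shader throughput, which is the basis for the comparison with a standard GPU's shader array.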

Okay, so that covers shader horsepower, but isn't it completely programmable? Won't that have to share computational power with the rest of the core graphics functions? Well, texturing is being done in hardware (anisotropic filtering, texture fetch, bilinear and trilinear filtering) because it would be slow on the shader cores, so there goes that one. The only real fixed-hardware question left is the render back-ends. Then I suppose there is the whole driver thing, but not all Intel drivers are atrocious; their chipset ones are pretty solid in my experience. Their IGPs' drivers are a different matter - I guess it's down to how much money Intel shoves in that direction.

Oh, and bandwidth-wise it's supposed to use a 512-bit-each-way ring-bus memory architecture, so I think it's okay on that front.

I'm still on the fence as to whether or not it'll be a decent product for gaming, but I don't think it's something to be ignored just because it's a graphics product from Intel.
 
"giving the theoretical high end Larrabee product 512 of these ALUs" This is a bit beefed up from what I heard before - which was 32 on the performance parts... if it does come out with 512 then it could potentially have some decent clout for physics.
 
You're missing the point: that still uses hardware-accelerated physics, albeit a very simplified and highly specific implementation - it's not so much that PhysX is essential, but rather that a hardware physics system is badly needed for future gaming titles, and PhysX is so far the only viable solution with any kind of forward momentum... it works, it's easy to use, free for most uses and cheap to license if you need to do something unique... very robust and great performance... what's there not to like... oh, it's Nvidia proprietary, never mind, let's look at the options... OpenCL? Oh, you still have to do the legwork yourself. Microsoft? Nada. ATI? Nada. Intel? Larrabee might show its head eventually, but performance will still be far inferior even to current GPUs, let alone the GPUs that come out around the time it's released.

You keep going on about the future, which means nothing if there is no sign of progress within a given timescale; it's nothing more than hot-air promises until it delivers, and it doesn't have a good track record to give people reason to believe in it. Time is always an issue, and people give up hope without credible results.
So far PhysX looks no better or more convincing than it did two or so years ago, and I've seen more impressive results from the Cascades demo above, ATI's demos and HL2 some years ago.
Having less-convincing-looking physics run more smoothly on dedicated hardware is of no interest to me or many others, which is why people didn't flock to the PhysX/CUDA card, or buy a graphics card for it, unless that feature was important to them and some application they use at present made worthwhile use of it.

PhysX being first does not automatically make it the best solution; history is full of such scenarios.

PhysX is not open, and there's a direct conflict of interest unless NV & ATI have a 50/50 share in its continued development, which NV will likely never agree to, as business comes first & foremost.

The best possible outcome is a DirectX physics API.
Good things come to those who wait, as history has shown that entering into some agreements for immediate gain can have some really damaging long-term effects, like between AMD & Intel.
 
Good things come to those who wait, as history has shown that entering into some agreements for immediate gain can have some really damaging long-term effects, like between AMD & Intel.

Yeah, let's not have any progress, because in 25 years we'll be able to model everything on an atomic scale... oh hang on, there's no games using that now, so it can't be needed, so let's all go back and play Quake 1.

I've never said PhysX is the best solution - but it's by far the most advanced hardware-accelerated solution, and fairly robust at that. I'm sorry, but the simple fact here is that ATI, Intel and MS just aren't stepping up to the table - there's a viable solution here and now, so why not make use of it?

I'm sure if ATI showed some interest in using it they could negotiate an acceptable solution with Nvidia... at least Nvidia has _said_ they are willing to work with ATI to benefit everyone, but they haven't had any reciprocation from ATI. Obviously that's taking Nvidia's word for it, but in this specific case I believe them.
 
Last edited:
"giving the theoretical high end Larrabee product 512 of these ALUs" This is a bit beefed up from what I heard before - which was 32 on the performance parts... if it does come out with 512 then it could potentially have some decent clout for physics.

It's 32 'cores' with 16 ALUs per core, the ALUs are what companies like Nvidia and ATi refer to as 'cores' or 'shaders.'
 
It's 32 'cores' with 16 ALUs per core, the ALUs are what companies like Nvidia and ATi refer to as 'cores' or 'shaders.'

Yeah, I know - in presentation slides a while back, though, they were talking about 32 ALUs total on the mainstream performance parts. It's not something I've followed closely - in fact not at all really, as the technology didn't seem to have much promise compared to modern GPUs at the time. They've obviously ramped things up a lot since.
 
Yeah, let's not have any progress, because in 25 years we'll be able to model everything on an atomic scale... oh hang on, there's no games using that now, so it can't be needed, so let's all go back and play Quake 1.

I've never said PhysX is the best solution - but it's by far the most advanced hardware-accelerated solution, and fairly robust at that. I'm sorry, but the simple fact here is that ATI, Intel and MS just aren't stepping up to the table - there's a viable solution here and now, so why not make use of it?

I'm sure if ATI showed some interest in using it they could negotiate an acceptable solution with Nvidia... at least Nvidia has _said_ they are willing to work with ATI to benefit everyone, but they haven't had any reciprocation from ATI. Obviously that's taking Nvidia's word for it, but in this specific case I believe them.

You have basically ignored what I said with that reply, because you're purely focusing on the API and not the more important business reasons for others not to use it. Thinking that businesses will put users' interests first, regardless of the possible ramifications for the other parties, rests on the highly - and I mean highly - unlikely chance that NV will play fair; based on NV's history, that is nothing more than a pipe dream, purely because it's what you would like to happen.
Progress comes when it's beneficial to the business, not to the consumer; that's why the product leaders drag their feet.

You have also ignored historical precedent: something is free to use & implement, then when it's widely adopted by the masses... boom, a patent or a demand for royalties appears.
 
Last edited:
However you spin it, though - Nvidia is first to the table with a robust and easy-to-implement hardware physics API, and there doesn't look like being anything from anyone else until the second half of 2009 at the earliest, if then... so I'm critical of ATI/AMD, Intel and MS, as they've really not put any effort into something that's going to be a major feature - like it or not - in the coming months and years...
 