Havok 'Red Dress Cloth' demo

That mannequin is a tasty dish.

Meanwhile, the physics are pretty damn good.

Wonder how long before they'll be able to implement it on run-of-the-mill items: curtains, tablecloths and such.
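
For anyone wondering what's going on under the hood in a demo like this: cloth is typically simulated as a grid of particles joined by distance constraints, stepped with Verlet integration. Below is a minimal Python sketch of that textbook approach; the grid size, rest length and iteration count are illustrative values, and Havok's actual solver is proprietary and certainly more sophisticated than this:

```python
import numpy as np

# Cloth as an N x N grid of particles joined by distance constraints.
N = 16                 # particles per side (illustrative)
REST = 0.1             # rest length of each constraint
DT = 1.0 / 60.0        # timestep: one frame at 60 fps
ITERS = 10             # constraint relaxation passes per frame
GRAVITY = np.array([0.0, -9.81, 0.0])

# Current and previous positions; Verlet stores velocity implicitly.
pos = np.array([[[x * REST, 0.0, y * REST] for x in range(N)]
                for y in range(N)])
prev = pos.copy()

# Structural constraints: each particle to its right and lower neighbour.
edges = ([((y, x), (y, x + 1)) for y in range(N) for x in range(N - 1)]
         + [((y, x), (y + 1, x)) for y in range(N - 1) for x in range(N)])

def step():
    global pos, prev
    # 1. Verlet integration: x' = x + (x - x_prev) + a * dt^2
    pos, prev = pos + (pos - prev) + GRAVITY * DT * DT, pos.copy()
    # 2. Relax the distance constraints (Gauss-Seidel style).
    for _ in range(ITERS):
        for (ay, ax), (by, bx) in edges:
            delta = pos[by, bx] - pos[ay, ax]
            dist = max(np.linalg.norm(delta), 1e-9)
            corr = delta * (0.5 * (dist - REST) / dist)
            pos[ay, ax] += corr
            pos[by, bx] -= corr
        # 3. Pin two corners so the sheet hangs like a dress or curtain.
        pos[0, 0] = prev[0, 0] = (0.0, 0.0, 0.0)
        pos[0, N - 1] = prev[0, N - 1] = ((N - 1) * REST, 0.0, 0.0)

for _ in range(60):    # simulate one second
    step()
print(pos[N - 1, N // 2])  # a hem particle, now sagging under gravity
```

Relaxing thousands of constraints per frame is embarrassingly parallel work, which is why this kind of effect is such a natural fit for GPU acceleration.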
 
I'm not too fond of these tech demos set in an empty space. Let's see this as part of an FPS action scene and see how chuggy it gets...
 
Because (and I am hitting myself for being drawn into this utterly tired argument yet again) you keep extolling the virtues of a closed proprietary API that is controlled by a single company, only available on that company's hardware, and tightly controlled to sell their hardware and related software technologies as if it were a standard. I would rather go without than hand the industry to a single company and their hardware, be they NV, ATI or Intel. Only a fool would disagree.

This is what most people have to say about this whole ridiculous situation, and why you (possibly a bit unfairly) get a lot of flak as you continue to extol the virtues of a technology that would bind the games industry to Nvidia, and berate anything and anyone who dares to question it. A prime example is your insistence that ATI are holding things back by not begging for scraps from the NV PhysX table, when it looks like their holdout will get us an API that can be accelerated on just about any piece of hardware out there, using an industry-agreed open standard rather than the closed proprietary CUDA. That is far preferable to your belief that people should have jumped on the PhysX bandwagon at the first opportunity.

Anyway, back to the point of this thread before the exciting talk took hold. The demo videos look like the usual physics fare; do we know what hardware platform they were running on?
 
I wasn't originally extolling the virtues of one particular solution... the argument has gone on so long, and been warped by a couple of people's agendas, that I have at times taken a stand for PhysX - which has been taken as preference by those with an agenda against me, for whatever reason...

I originally slated ATI for not taking active steps towards a hardware physics solution, as I know it's something the game development industry is coming to need, and there have been games/features put on the back burner because of the lack of it... the ideal solution at the time I brought it up would have been for ATI to adopt PhysX, because it was already established, is fairly well documented and has a proven track record in terms of stability, etc.

I have also slated ATI because their track record isn't great when it comes to implementing and supporting the features that game developers actually want and are asking for... and I remain skeptical about their ability to deliver this in the form of Havok + OpenCL + Stream and to actively support and develop the features game developers actually need.

Right now it looks like the whole situation is going to be fractured by competing standards, and game development held back another couple of years while things settle down...
 
Think you'll find it's Nvidia doing the holding back, as ATI's got the better tech.

Where's Nvidia's DX10.1, their tessellation unit, etc.?

The holding back I was talking about was game development specifically... there's very little need for any of the features from DX10.1 in games at the moment, and little to no interest from game developers - the only advantage from it was performance on ATI hardware.

Tessellation, again, isn't something the majority of developers require at the moment... though I believe Epic has some interest in it. In fact, John Carmack of id Software has explicitly stated that the direction for his engines/games is the reverse: ultra-high-detail characters/assets are reduced down to their target LOD for the final product, without any extrapolation upwards. Personally I think a bit of mesh smoothing and so on to increase detail has nice results (see the toy sketch after the quote below), but that doesn't seem to be the opinion of the majority of game developers at the moment, and the lack of the feature certainly isn't holding up progress, though that may change once DX11 becomes a development target.

In fact, I'll quote the man on this one, heh:

Is AMD’s tessellation engine that they put in the R600 chips anywhere close to what you are looking for?

CARMACK: No, tessellation has been one of those things up there with procedural content generation where it’s been five generations that we’ve been having people tell us it’s going to be the next big thing and it never does turn out to be the case. I can go into long expositions about why that type of data amplification is not nearly as good as general data compression that gives you the data that you really want. But I don’t think that’s the world beater; I mean certainly you can do interesting things with displacement maps on top of conventional geometry with the tessellation engine, but you have lots of seaming problems and the editing architecture for it isn’t nearly as obvious.
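
As a toy illustration of the "data amplification" Carmack is talking about: tessellation amplifies a coarse mesh into many more triangles at runtime, roughly like the midpoint subdivision sketched below. This is a hypothetical Python illustration of the general idea, not how any real tessellation unit is implemented:

```python
import numpy as np

def subdivide(vertices, triangles):
    """One level of 1:4 midpoint subdivision: every edge is split at its
    midpoint and each triangle is replaced by four smaller ones."""
    verts = [np.asarray(v, dtype=float) for v in vertices]
    midpoint_index = {}  # (lo, hi) vertex pair -> index of new midpoint

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_index:
            verts.append((verts[i] + verts[j]) * 0.5)
            midpoint_index[key] = len(verts) - 1
        return midpoint_index[key]

    tris = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, tris

# One authored triangle amplifies to 4, then 16, ... with no new source
# data: the new vertices are pure interpolation of existing ones.
v, t = subdivide([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
v, t = subdivide(v, t)
print(len(t))  # 16 triangles from 1
```

The extra vertices carry no information the artist authored; a displacement map layered on top would at least add real detail, which is the trade-off Carmack mentions.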
 
As I said earlier, it will be Intel supporting Havok, and they certainly do have a very impressive track record of developer relationships. All ATI/Nvidia/whoever have to do is make sure their OpenCL interface is up to standard. Perhaps Nvidia will stop trying to force the proprietary CUDA onto the industry and port PhysX to OpenCL.

At the risk of repeating myself (yet again): ATI taking up PhysX (and therefore CUDA) would be bad for them and the industry as a whole, as it would leave everybody at the mercy of the whims of Nvidia. I would think even you would see that it would be of no benefit to anybody apart from their shareholders. All Nvidia need to do to stop the industry forking off into different directions is to drop the reliance on CUDA and port PhysX to OpenCL (there's a sketch of what that looks like below). If they refuse, then they will be responsible for things being held back, not ATI as you seem to insist.

Anyway, this seems to be heading down the tired old ideological tub-thumping path that these discussions usually take. I have better things to be doing at 00:10... like sleeping.
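
For context on what "port PhysX to OpenCL" would involve: the compute kernels get written in OpenCL C and compiled at runtime for whatever device is present, regardless of vendor. Here is a minimal, purely illustrative sketch using the pyopencl bindings; the trivial gravity-integration kernel is made up for the example and has nothing to do with PhysX or Havok internals:

```python
import numpy as np
import pyopencl as cl

# A trivial particle integrator written in OpenCL C. The same source is
# compiled at runtime for whatever device the driver exposes.
KERNEL_SRC = """
__kernel void integrate(__global float4 *pos,
                        __global float4 *vel,
                        const float dt)
{
    int i = get_global_id(0);
    vel[i].y -= 9.81f * dt;   // gravity
    pos[i]   += vel[i] * dt;  // Euler step
}
"""

n = 4096
pos = np.zeros((n, 4), dtype=np.float32)
vel = np.zeros((n, 4), dtype=np.float32)

ctx = cl.create_some_context()        # picks any available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
pos_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=pos)
vel_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=vel)

program = cl.Program(ctx, KERNEL_SRC).build()
program.integrate(queue, (n,), None, pos_buf, vel_buf, np.float32(1.0 / 60.0))
cl.enqueue_copy(queue, pos, pos_buf)  # read the results back to the host
```

The same KERNEL_SRC string would build and run unchanged on an NV, ATI or Intel OpenCL implementation, which is the portability argument in a nutshell.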
 
Yeah, but it's mainly AMD/ATI who are championing the Havok cause; Intel currently doesn't really have any hardware capability for it. Though if Intel really does dig in and Nvidia get their OpenCL in order, then it's a whole different story...

While I agree with your point about ATI adopting PhysX... Nvidia (for whatever it may or may not have been worth) were prepared to give certain guarantees so that ATI wouldn't be left at a disadvantage... they were unusually open to negotiation on that matter, in fact... and this is the reason I slated ATI at the time: not only did they show no interest in the offer, they showed absolutely no interest in any kind of hardware physics solution.

I am actually ambivalent as to which solution gets adopted; while I have a slight preference towards PhysX, as it feels more natural and less in the way for incidental objects, both PhysX and Havok are very solid solutions on the software side, and Havok has a decent following within the game development industry.
 
It was obvious AMD wouldn't support PhysX; that would be like an oil company pushing electric or water-fuelled cars.

If Havok kills off PhysX, then Intel can probably put a stop to AMD's ATI cards doing Havok; and AMD wouldn't have a major problem with that either, as it would hurt Nvidia the most.

Like I've said before, this is a battle between CPU and GPU over the future of game physics - Intel & AMD versus Nvidia. Forget about ATI; they are a non-entity now, aside from clever branding.

AMD’s Catalyst product manager, Terry Makedon, revealed on his Twitter feed that AMD would reveal its “ATI GPU Physics strategy”.

AMD's strategy, eh? ;)

I just want the best technology to win, and at the moment there is no doubt a GPU-based solution is the best, due simply to the sheer horsepower of GPUs.

The trouble with Havok is that I can see Intel prevailing and then shifting physics back onto the CPU. Sure, they can beef up future CPUs with better physics processing capability and sell us all brand new "physics ready" CPUs, but they'll still only just be reaching the level where PhysX is today (in raw hardware horsepower terms), and between now and then GPUs will have become far more powerful.
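
To put rough numbers on the "sheer horsepower" point, here's a back-of-envelope comparison of peak single-precision throughput for current hardware (approximate spec-sheet figures, not benchmarks):

```python
# Peak single-precision throughput, circa 2008/2009 hardware.
# All figures are approximate public specs, not measured numbers.
gtx_280 = 240 * 1.296e9 * 3   # 240 SPs x 1.296 GHz x 3 flops/clock (MAD + MUL)
hd_4870 = 800 * 0.750e9 * 2   # 800 SPs x 750 MHz x 2 flops/clock (MAD)
core_i7 = 4 * 3.0e9 * 8       # 4 cores x 3 GHz x 8 SP flops/clock (SSE)

for name, flops in [("GTX 280", gtx_280), ("HD 4870", hd_4870),
                    ("Core i7", core_i7)]:
    print(f"{name}: {flops / 1e9:.0f} GFLOPS")
# GTX 280: 933, HD 4870: 1200, Core i7: 96 --
# roughly an order of magnitude in the GPUs' favour, on paper.
```

Peak numbers flatter GPUs (real physics workloads are branchy and bandwidth-bound), but the gap is real.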
 
How is it Nvidia doing the holding back, then? If ATI pull out all this tech, why is it not used in games? It's simple: ATI may pull out tech, but they can't back it up or provide support for it. The best ATI can ever do is wait for Nvidia to innovate, then come along and imitate.

Nvidia provide tech such as PhysX, and there are already games out there using it. That happens because developers can depend on Nvidia. After all, ATI still haven't got a true unified architecture out yet, hence they have large limitations on what they can do. Last I checked, ATI are still using 5 fixed-function shader units to act as one "stream processor". They still have a lot to learn.

As for stuff like tessellation and DX10.1, it won't ever kick off - not because ATI already have it; no, it's because ATI won't provide support for it. Games will only start to use tessellation when Nvidia decide it's time to implement it, since developers can rely on Nvidia.
 
AMD have a lot to learn? Give us a break. I ask you this: who do you think has been happier with their GPUs since June last year?
 
Excellent!
 
Nvidia users are happier, going by the amount of driver issues ATI users are having - especially with DXVA acceleration broken 80% of the time on ATI. Don't get me wrong, I've used my fair share of ATI cards and I still have a few chugging along in a couple of machines here, but there are always driver issues with them.

I now see why Nvidia charge more for their cards. Unless ATI can match Nvidia on driver support as well as performance consistency, it's always going to be Nvidia for me from now on.

(All stuff in this post is IMHO only.)
 
To be honest, for the first 6 months with my 320 the drivers were crap with my HDTV, so neither company is that good; it's just swings and roundabouts. I am more concerned with how Nvidia have been acting the last year or so. The hardware from both companies is about equal, and I do feel myself that ATI have the better hardware, as it seems to scale so well. Yes, the drivers seem not quite as good as Nvidia's, but as I am not using an ATI card I can't really comment on that.

Whatever you think about AMD, if they had never brought out their 4*** series, people would be paying a lot more money and Nvidia GPUs would not be moving forward so fast.

I do think AMD are using Havok because they cannot afford to be tied to something Nvidia has tight control over. Plus (this is a guess), since they make both CPUs and GPUs and are trying to get Havok to work on both, it might suit them as well - even if Intel really pushes Havok usage towards the CPU, AMD could still use it.
 