That's a highly optimised solver for some fairly primitive rigid-body (RB) interactions... quite impressive how many boxes it can chuck around... but it wouldn't do much else...
Nevertheless it looks impressive.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
I'm not sure about that - I dare say games publishers wouldn't be too happy at being accused of massively optimising a game for one GPU architecture and, as a result, of nobbling, in this case, ATI users with a 'restricted' game.
I suspect this is why CUDA hasn't been seen in games much, at least not to any really noticeable effect - it might be good for Nvidia users, but it would be bad for the publishers, who really don't care what hardware their game runs on, as long as it sells and they can recoup development costs. They can't do that by locking out half of their potential audience!
OpenCL/DirectCompute offer a way around that by being hardware agnostic - less of a problem for publishers and, hopefully, for games coders [I know a couple - it sounds like a nasty job], who wouldn't have to attend a vendor-sponsored training course to get things working.
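Just to show what 'hardware agnostic' looks like in practice, here's a minimal sketch using nothing but the standard OpenCL host API - it lists whatever GPUs are in the box, regardless of vendor:

```c
/* Minimal sketch: enumerate every OpenCL platform and GPU present,
   whoever made it. Compile with -lOpenCL. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof name, name, NULL);
        printf("Platform: %s\n", name);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           8, devices, &num_devices) != CL_SUCCESS)
            continue;   /* no GPUs on this platform */

        for (cl_uint d = 0; d < num_devices; ++d) {
            char dev[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof dev, dev, NULL);
            printf("  GPU: %s\n", dev);
        }
    }
    return 0;
}
```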
FWIW I'm not one of these people who think that CUDA is evil evil evil - it was, rather like the Ford Model T, the one that brought GPGPU to mainstream attention and made it practically usable. Being first, however, doesn't make it worthier than any other solution. Unless you are a Ford man, natch.
The wider adoption that OpenCL will naturally get as a result of not being vendor-locked should accelerate its development; after all, everyone wants their code to run faster, and now they don't need specialised hardware to do it - with OpenCL, any recent GPU can run GPGPU tasks. This is not a bad thing in the slightest.
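The kernels are just as portable. A trivial example - this OpenCL kernel source is handed to the runtime as a string and compiled at run time (via clCreateProgramWithSource/clBuildProgram) for whatever GPU happens to be present:

```c
/* vadd.cl - a trivial OpenCL kernel that runs unchanged on any
   vendor's GPU. One work-item per array element. */
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out,
                   const unsigned int n)
{
    size_t i = get_global_id(0);   /* this work-item's element */
    if (i < n)                     /* guard against padded global size */
        out[i] = a[i] + b[i];
}
```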
Go ahead!
Am I the only one sitting here with my jaw wide open at this? That's a stunning difference.
Was intrigued by your comment, so just went to watch. Was not impressed, just a bunch of background effects, which is what you'd expect from optional PhysX. We won't get anything meaningful integrated into gameplay until there's an open standard which works on all major graphics hardware.
Sure, but it does make anything I say a little more credible than purely wild, unsubstantiated speculation...
It's not just about performance increases (though if you could bring GPGPU properly to bear on some tasks you'd see more like 400+% gains); it's about all the things you could implement without unplayable performance. AI especially spends a lot of time sequentially cross-comparing static data, so you could massively increase the quality and realism of things like dynamic way/path finding without bringing performance to its knees.
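As a rough sketch of why that sort of work parallelises so well (the kernel and all its names are hypothetical, purely for illustration): rather than the CPU walking every agent/waypoint pair one at a time, each GPU work-item scores a single pair:

```c
/* pairdist.cl - hypothetical kernel, purely illustrative: every
   (agent, waypoint) pair gets its own work-item instead of the CPU
   cross-comparing them sequentially. */
__kernel void score_pairs(__global const float2 *agents,     /* agent positions */
                          __global const float2 *waypoints,  /* static nav data */
                          __global float *cost,              /* n_agents * n_waypoints */
                          const unsigned int n_agents,
                          const unsigned int n_waypoints)
{
    size_t a = get_global_id(0);    /* agent index    */
    size_t w = get_global_id(1);    /* waypoint index */
    if (a >= n_agents || w >= n_waypoints)
        return;                     /* guard against padded NDRange */

    float2 d = waypoints[w] - agents[a];
    /* squared distance standing in for a real traversal cost */
    cost[a * n_waypoints + w] = dot(d, d);
}
```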
http://www.youtube.com/watch?v=XSPm5d8GwLI
It'll have to be one I made earlier, as I'm working on something today... OK, it's not pretty, but it can handle around 500 RBs and ~30 ragdolls on a Q6600.
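The core of it is nothing exotic - roughly this shape (a toy version for illustration, not my actual code): semi-implicit Euler integration plus a brute-force broad-phase, and it's that O(n²) pair loop that keeps a CPU down at a few hundred RBs:

```c
/* Toy illustration, not the actual demo code: semi-implicit Euler
   integration plus a brute-force sphere broad-phase. */
#include <stddef.h>

typedef struct { float x, y, z; } vec3;

typedef struct {
    vec3  pos, vel;
    float radius;           /* bounding sphere for the broad-phase */
} rigid_body;

void step(rigid_body *b, size_t n, float dt)
{
    const float g = -9.81f; /* gravity, m/s^2 */

    /* integrate velocity first, then position (semi-implicit Euler) */
    for (size_t i = 0; i < n; ++i) {
        b[i].vel.y += g * dt;
        b[i].pos.x += b[i].vel.x * dt;
        b[i].pos.y += b[i].vel.y * dt;
        b[i].pos.z += b[i].vel.z * dt;
    }

    /* brute-force broad-phase: every pair, every frame */
    for (size_t i = 0; i < n; ++i)
        for (size_t j = i + 1; j < n; ++j) {
            float dx = b[i].pos.x - b[j].pos.x;
            float dy = b[i].pos.y - b[j].pos.y;
            float dz = b[i].pos.z - b[j].pos.z;
            float r  = b[i].radius + b[j].radius;
            if (dx * dx + dy * dy + dz * dz < r * r) {
                /* overlapping pair - a real solver would generate
                   contacts and resolve them with impulses here */
            }
        }
}
```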
Look at Batman or Assassin's Creed.
Edit: I don't think Nvidia would care what people think if it gave them an edge over ATI.
There was nothing in Batman that couldn't be done visually with the usual tricks.
Scripted, repeated effects and the PhysX-driven versions should both be offered as options.
Flags, water, flames, breaking glass and particles are nothing new. The problem is that NV gets some of them taken out to create a more noticeable visual difference in favour of PhysX, as it's easier to notice something missing entirely than to notice the same thing being there but moving more realistically.
It's more complex than that... a lot of developers have experience with Havok and would prefer hardware-accelerated Havok, and some games are more suited to PhysX while some work better with Havok. What we need is a common platform to run both of these on top of... unfortunately Nvidia tried to tie PhysX to running on top of CUDA when approaching other parties, which is why many told them to get lost...
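At the engine level, that 'common platform' wouldn't need to be much more than a thin, backend-agnostic interface - something like this sketch (every name here is made up for illustration), with PhysX, Havok or an OpenCL solver slotted in behind it:

```c
/* Hypothetical sketch of a "common platform": the engine codes
   against one thin interface and any physics backend plugs in. */
typedef struct physics_backend {
    void *ctx;   /* backend-private state */
    void (*create_box)(void *ctx, const float pos[3],
                       const float half_extents[3], float mass);
    void (*step)(void *ctx, float dt);
    void (*destroy)(void *ctx);
} physics_backend;

/* The game loop only ever sees the interface, never the vendor: */
void simulate_frame(physics_backend *pb, float dt)
{
    pb->step(pb->ctx, dt);
}
```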
I'm not saying that. What I am saying is that Nvidia and ATI will do almost anything to get an edge, be that with CUDA/PhysX or whatever.