• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Let Battle Commence

Can you back this up please, or is it, like most other stuff you post, rumour or from a blog? ;)

There's nothing I can back up without potentially getting other people into trouble...

If you don't believe me, fair enough, but personally I think my track record for being right is pretty good.
 
We can't dismiss CUDA for getting the ball rolling sooner, but I'd never think for a second that no one else would have done the same. That would be like saying that without 3dfx we would still have no 3D graphics cards today, or that without Matrox there would never have been multi-screen gaming.

Even ATI had its Avivo converter and Folding@home running on the GPU well before CUDA, so the idea of things other than graphics running on the GPU was already brewing.

Maybe you misunderstood my point, or more likely I didn't put it across too well. I'm not saying that CUDA was the first or the best by any means, but it was brought to the masses and the masses were told about it (or had it rammed down their throats, if you're anti-NV :D ). That is one of Nvidia's strengths: when they have a message to get across, they make sure they get it across. They wanted the world to know about GPU computing, so CUDA was plastered everywhere. And that is the reason why I say you can't dismiss CUDA as a significant step towards GPU computing becoming accepted across the board.
 
There's nothing I can back up without potentially getting other people into trouble...

If you don't believe me, fair enough, but personally I think my track record for being right is pretty good.

Do you really think NV offered out PhysX etc. to rivals? Come on, it's just a PR stunt to make people who are so easily led believe them...

If you had the consumers at the forefront, you would want it to be part of the industry standard, i.e. DX11... but however you spin it, you're biased to the green side ;)

Offering it out (if it ever happened; who are you protecting with your info, MI6 agents?!) is not an industry standard...
 
Pictures of the GTX 380 surface on the web...

nv380.jpg
 
Sure, I have no proof... as usual... but I'll probably be proved right, as usual. Was I wrong about the number of SPs on the 5870, the performance gains, the launch price point, etc.? Surely if it was all so wild and unsubstantiated I'd be wrong more often than right...

Being right about something does not automatically make all your future statements about anything else true.
Proof of hardware that has since become available and proof of correspondence are completely different things.

RE: DX, maybe not, but that wasn't my point. My point was something along these lines: games like Supreme Commander could run silky smooth today using CUDA for a lot of the heavy parallel processing; AI especially is something that can be massively sped up in this regard. Instead we will have to wait at least until DX11 gets up to speed, and more likely DX12, before we see anything close to the same level in what it can offer.
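Just to make that concrete, here's a rough CUDA sketch of the kind of thing I mean: one GPU thread per unit, each scoring its unit against every enemy, instead of the CPU crawling through the same comparisons one by one. The names and numbers (Unit, threat_kernel, 4096 units) are made up for illustration, not taken from SupCom or any real engine.

#include <cstdio>
#include <cuda_runtime.h>

struct Unit { float x, y; int team; };

// One thread per unit: accumulate a simple "threat" score from every enemy.
__global__ void threat_kernel(const Unit* units, float* threat, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float score = 0.0f;
    for (int j = 0; j < n; ++j) {
        if (units[j].team == units[i].team) continue;
        float dx = units[j].x - units[i].x;
        float dy = units[j].y - units[i].y;
        score += 1.0f / (1.0f + dx * dx + dy * dy);   // nearer enemies count for more
    }
    threat[i] = score;
}

int main()
{
    const int n = 4096;                                // a SupCom-ish unit count
    Unit* units;  float* threat;
    cudaMallocManaged(&units,  n * sizeof(Unit));      // unified memory keeps the sketch short
    cudaMallocManaged(&threat, n * sizeof(float));
    for (int i = 0; i < n; ++i) units[i] = { float(i % 64), float(i / 64), i & 1 };

    threat_kernel<<<(n + 255) / 256, 256>>>(units, threat, n);
    cudaDeviceSynchronize();
    printf("threat[0] = %f\n", threat[0]);

    cudaFree(units);
    cudaFree(threat);
    return 0;
}

It's still O(n^2) work overall, but it's exactly the embarrassingly parallel shape a GPU eats for breakfast, which is the whole point.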

You mean like the DX11 ~38% performance boost over DX10 in BattleForge?
Yes, currently only the new SSAO is compute shader accelerated. I also had some tessellation stuff prepared, but with the typical RTS camera setup there was no visual benefit from it.

But regular BattleForge players know that we add new stuff quite often, so it's possible that future patches will bring more DX11 goodness. There are still some shaders that might profit from Shader Model 5, but we need to test this in more detail.

Correct, every system that has the DX11 runtime and at least a DX10 card will make use of the new runtime, but it will only run with the feature level that is supported by the hardware. The new SSAO compute shader requires feature level 11. There is also a feature level 10 pixel shader for the new SSAO, but it is slower than the compute shader.

We are seeing improvements of up to ~38% in average FPS when comparing feature level 10 and 11 on the same hardware.
http://forum.beyond3d.com/showpost.php?p=1339222&postcount=3801
 
Maybe you misunderstood my point, or more likely I didn't put it across too well. I'm not saying that CUDA was the first or the best by any means, but it was brought to the masses and the masses were told about it (or had it rammed down their throats, if you're anti-NV :D ). That is one of Nvidia's strengths: when they have a message to get across, they make sure they get it across. They wanted the world to know about GPU computing, so CUDA was plastered everywhere. And that is the reason why I say you can't dismiss CUDA as a significant step towards GPU computing becoming accepted across the board.

It will not be accepted across the board if only an NV graphics card can be used for it.
 
Being right about something does not automatically make all your future statements about anything else true.
Proof of hardware that has since become available and proof of correspondence are completely different things.



You mean like the DX11 ~38% performance boost over DX10 in BattleForge?

http://forum.beyond3d.com/showpost.php?p=1339222&postcount=3801

Sure, but it does make anything I say a little more likely than purely wild, unsubstantiated speculation...

It's not just about performance increases (though if you could bring GPGPU properly to bear on some tasks you would see more like 400%+ gains); it's about all the things you could implement without unplayable performance. AI especially is something that spends a lot of time sequentially cross-comparing static data. You could massively increase the quality and realism of things like dynamic way/path-finding without bringing performance to its knees.
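For the path-finding side specifically, the usual GPU-friendly trick is a cost field over the map grid rather than per-unit A*: every cell is relaxed from its neighbours in parallel, sweep after sweep, until the cost of reaching the goal has flooded across the whole map. Purely as an illustrative sketch (the 256x256 grid, the relax kernel and the sweep count are all assumptions of mine, not from any actual game):

#include <cstdio>
#include <cuda_runtime.h>

#define W 256
#define H 256
#define BIG 1e9f

// One thread per cell: take the cheapest of the four neighbours plus a step cost.
// Repeated sweeps flood the cost-to-goal out across the whole map in parallel.
__global__ void relax(const float* cost_in, float* cost_out, const unsigned char* blocked)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;
    int i = y * W + x;
    if (blocked[i]) { cost_out[i] = BIG; return; }

    float best = cost_in[i];
    if (x > 0)     best = fminf(best, cost_in[i - 1] + 1.0f);
    if (x < W - 1) best = fminf(best, cost_in[i + 1] + 1.0f);
    if (y > 0)     best = fminf(best, cost_in[i - W] + 1.0f);
    if (y < H - 1) best = fminf(best, cost_in[i + W] + 1.0f);
    cost_out[i] = best;
}

int main()
{
    float* a; float* b; unsigned char* blocked;
    cudaMallocManaged(&a, W * H * sizeof(float));
    cudaMallocManaged(&b, W * H * sizeof(float));
    cudaMallocManaged(&blocked, W * H);
    for (int i = 0; i < W * H; ++i) { a[i] = BIG; b[i] = BIG; blocked[i] = 0; }
    a[(H / 2) * W + W / 2] = 0.0f;                 // the goal cell costs nothing to reach

    dim3 block(16, 16), grid(W / 16, H / 16);
    for (int sweep = 0; sweep < 512; ++sweep) {    // ping-pong between the two buffers
        relax<<<grid, block>>>(a, b, blocked);
        float* t = a; a = b; b = t;
    }
    cudaDeviceSynchronize();
    printf("cost from the top-left corner: %.0f\n", a[0]);

    cudaFree(a);
    cudaFree(b);
    cudaFree(blocked);
    return 0;
}

Units then just walk downhill along the finished cost field; refresh the blocked map when buildings or wrecks change and the paths adapt, which is the sort of dynamic way-finding I'm on about.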
 
Sure, but it does make anything I say a little more likely than purely wild, unsubstantiated speculation...

It's not just about performance increases (though if you could bring GPGPU properly to bear on some tasks you would see more like 400%+ gains); it's about all the things you could implement without unplayable performance. AI especially is something that spends a lot of time sequentially cross-comparing static data. You could massively increase the quality and realism of things like dynamic way/path-finding without bringing performance to its knees.
Why hasn't this been done yet, then? Nvidia seems to have a lot of clout; if it could have been done, Nvidia would have paid through the nose for it just to show what CUDA can do.
 
Sure, but it does make anything I say a little more likely than purely wild, unsubstantiated speculation...

No, it does not make what you say more likely.
I have seen things like this many times over the last decade that resulted in absolutely nothing.


It's not just about performance increases (though if you could bring GPGPU properly to bear on some tasks you would see more like 400%+ gains); it's about all the things you could implement without unplayable performance. AI especially is something that spends a lot of time sequentially cross-comparing static data. You could massively increase the quality and realism of things like dynamic way/path-finding without bringing performance to its knees.

When it's here, then I will care, and it will not get here if it's locked to one graphics maker; it will have no more impact on gaming than PhysX.
PhysX: What Readers Think http://www.anandtech.com/video/showdoc.aspx?i=3558
 
If CUDA had big gains in games I think Nvidia would do well out of it, but the point I'm making is that if they could do this with CUDA they would have done it by now.
 
If CUDA had big gains in games I think Nvidia would do well out of it, but the point I'm making is that if they could do this with CUDA they would have done it by now.

I'm not sure about that. I dare say games publishers wouldn't be too happy at being accused of massively optimising the game for one GPU architecture and, as a result, of nobbling, in this case, ATI users with a 'restricted' game.

I suspect this is why CUDA hasn't been seen in games much, at least not to any really noticeable effect. It might be good for Nvidia users, but it would be bad for the publishers, who really don't care what hardware their game runs on as long as it sells and they can recoup development costs. They can't do that by locking out half of their potential audience!

OpenCL/DirectCompute offer a way around that, though, by being hardware agnostic: less of a problem for publishers and, hopefully, for games coders too [I know a couple; it sounds like a nasty job], who wouldn't have to attend a vendor-sponsored training course to get stuff working.

FWIW I'm not one of these people who think that CUDA is evil evil evil - it was, rather like the Ford Model T, the one that brought GPGPU to mainstream attention and made it practically usable. Being first, however, doesn't make it worthier than any other solution. Unless you are a Ford man, natch.

The wider adoption that OpenCL will naturally get as a result of not being vendor locked should accelerate its development; after all, everyone wants their code to run faster, and now they don't have to get specialised hardware to do it. With OpenCL, any recent GPU can be used to run GPGPU tasks. This is not a bad thing in the slightest.
 
If CUDA had big gains in games I think Nvidia would do well out of it, but the point I'm making is that if they could do this with CUDA they would have done it by now.

Much of it comes down to people not wanting to lock out a good slice of their potential audience...

It works, and works very well... a single 128-SP G92 core is capable of running such a workload between 2.5x and 60x faster than the fastest quad-core CPU available at the time of the study, which I believe was the QX9650.
 
Am I the only one sitting here with jaw wide open at this? That's a stunning difference.

Now imagine what it would be like if they implemented the PhysX stuff that actually affects gameplay, etc.; the stuff that can't be included today because it couldn't be disabled, and running it on the CPU would mean 4 fps on ATI hardware...
 
Now imagine what it would be like if they implemented the PhysX stuff that actually affects gameplay, etc.; the stuff that can't be included today because it couldn't be disabled, and running it on the CPU would mean 4 fps on ATI hardware...

So let's use this instead, which runs on all hardware.

The physics in Ghostbusters is powered by Velocity Physics, part of the Infernal Engine.
1,500 boxes and 200 ragdolls in the scene.
http://www.viddler.com/explore/HardOCP/videos/36/
 
That's a highly optimised solver for some fairly primitive rigid-body interactions... quite impressive how many boxes it can chuck around, but it wouldn't do much else...
 