I went to the 3D gaming event yesterday and there was an nVidia representative there bigging up nVidia as much as he could.
He was a nice guy and all, before he started on the nVidia script.
He was raging about CUDA and how great it is so I asked him, "well isn't CUDA somewhat obsolete once OpenCL and DirectCompute are out?"
His response was "actually, that's a very good point, but you still need CUDA-approved hardware to run OpenCL and DirectCompute."
I asked him why he said this, because OpenCL and DC are designed to be open standards that run on any hardware.
He had a mini rant about how ATi haven't brought out any drivers supporting them while nVidia has been supporting them since early 2009. I said that's irrelevant since Windows 7 isn't official yet; there's no reason to boast about supporting them before Win 7 has been released.
I agree, it looks a bit naff.
Hardware just isn't powerful enough yet to do the calculations required. It's the same with DX11's tessellation and the use of displacement mapping instead of parallax/bump mapping. It's all good in theory, but it's just too demanding to work well with this new generation of cards, red or green.
Surely it would be better to do physics calculations on the CPU instead of the GFX card... It's not as if massive amounts of RAM bandwidth are needed...
Can I get an OWNED!
Wait, I just noticed in the nVidia 'body hits wall' demo that parts of the destroyed wall quite obviously get culled (when you can see the back of that one wall, you can clearly see pieces disappearing) to keep the number of simulated objects down. How lame. :\
Sounds like they let the lamb in amongst the wolves :S If they're gonna send a rep in amongst enthusiasts, you'd think they'd send someone who knew what they were talking about.
Funniest thing was that I didn't once mention ATi myself without him mentioning them first, yet he seemed to respond to me with "well, we're still better than ATi though".
He kept trying to say that ATi has Stream and Brook+, to which I said a few times, "I don't agree with Stream either, or Brook+; I'm looking forward to an open stream-computing standard that will run on any GPU."
To which he replied "Fermi's gonna be a monster GPU."
and had another rant about how nVidia's technology is far better than ATi's: "Have you seen Folding@home? That flies on nVidia hardware; that in itself is a demonstration that our architecture is far superior to ATi's."
He wouldn't have it when I told him that F@H hasn't been optimised for the latest ATi cards, and that if nVidia's tech was so much better, why isn't it apparent in games as some sort of giant performance difference?
I don't think he'd really been "trained" to reply to questions other than "oh wow, nVidia is so cool."
I asked him a few questions about CAD usage of GPUs and he didn't have much of a clue at all.
I had a nice talk with him and he was a nice guy, but he was definitely reciting a script.
"Now that quad-core CPUs are more common, surely parallel calculations are also a good thing for quad and upcoming hex-core CPUs to deal with too?"

Physics calcs are massively parallel, so a GPU doing them makes the most sense.
Same with anything that excels using stream shaders.
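Just to illustrate what "massively parallel" means here (a toy Python sketch of my own, nothing to do with the actual PhysX code): each particle's update depends only on its own state, so the same little bit of maths can be farmed out to thousands of shader units, or a few CPU cores, with no coordination between particles.

```python
# Minimal sketch: explicit Euler integration of independent particles.
# Each particle's update reads only its own state, so the loop body
# could run on thousands of GPU threads with no synchronisation.

def step(particles, dt, gravity=-9.81):
    # particles: list of dicts with 'pos' and 'vel' (y-axis only, for brevity)
    out = []
    for p in particles:
        vel = p["vel"] + gravity * dt   # same instruction...
        pos = p["pos"] + vel * dt       # ...applied to every particle
        out.append({"pos": pos, "vel": vel})
    return out

particles = [{"pos": 10.0, "vel": 0.0} for _ in range(4)]
particles = step(particles, dt=0.1)
```

That independence is exactly what stream shaders are built for, and it's why a GPU gets to apply its huge core count to the problem.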
Oh dear, Kyle you absolute legend.
Now that quad-core CPUs are more common, surely parallel calculations are also a good thing for quad and upcoming hex-core CPUs to deal with too?
I guess it would be nice to have the option for both the CPU and GPU to do physics stuff, maybe even the CPU and GPU doing physics calculations at the same time... but I guess we'll have to see how it all pans out.
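On the CPU side at least, the split is easy to picture: because the objects are independent, you can hand each worker its own slice of the array. A toy sketch using Python's standard thread pool (my own illustration, not any real physics engine's API); a real engine could give the GPU one portion and the CPU cores the rest.

```python
from concurrent.futures import ThreadPoolExecutor

def integrate(chunk, dt=0.1, gravity=-9.81):
    # Update one slice of (pos, vel) pairs; no shared state is touched,
    # so the slices can be processed completely independently.
    return [(pos + (vel + gravity * dt) * dt, vel + gravity * dt)
            for pos, vel in chunk]

particles = [(10.0, 0.0)] * 8
half = len(particles) // 2

# Two workers each integrate their own half "at the same time".
with ThreadPoolExecutor(max_workers=2) as pool:
    a, b = pool.map(integrate, [particles[:half], particles[half:]])
particles = a + b
```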
Gotcha, now I understand why GFX cards have a lot of memory bandwidth, as parallel processing like that must churn through an awful lot of data.
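Some back-of-envelope numbers to make that concrete (every figure here is my own illustrative assumption, not a measured or vendor-quoted number): even a modest fragment simulation touches its whole state many times per step, and at 60 steps a second that adds up fast.

```python
# Rough bandwidth estimate for a fragment/particle simulation.
# All figures below are illustrative assumptions, not measured numbers.
n_objects = 100_000   # simulated fragments
state_bytes = 64      # position, velocity, orientation, etc. per object
fps = 60              # simulation steps per second
passes = 10           # assume ~10 reads/writes of each object's state
                      # per step (integration, broad phase, solver...)

bytes_per_second = n_objects * state_bytes * passes * fps
print(bytes_per_second / 1e9, "GB/s")  # prints 3.84 GB/s
```

And that's just the object state; collision queries against level geometry and texture reads for the rendering pass come on top of it, which is where the GPU's fat memory bus earns its keep.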
With any luck, there will be more apps which can harness the power of these GPUs. I think even processing of sound could be done... This would allow the use of more plug-ins for programs like Cubase, for example, but I think that is a long way off right now...
I popped around to a mate who has the 3D nVidia goggles (shutter thingies). It's pretty good as a demonstration (had a look at Res Evil 5 which is a 'designed for 3D' game). Still think it needs another generation or two before it addresses some of the problems (120Hz screen, high brightness required, sheen on the gaming models etc).
I don't game enough to warrant the cost of high-powered graphics cards + screen (+ PC upgrade to match!) as I play QW:ET once every month or two... but I did find out that the Guitar Hero 5 drum kit has a MIDI IN (vDrums!)