NVIDIA DirectX 11 Questionnaire – A Response to AMD’s Views

I popped around to a mate who has the 3D nVidia goggles (shutter thingies). It's pretty good as a demonstration (had a look at Res Evil 5, which is a 'designed for 3D' game). Still think it needs another generation or two before it addresses some of the problems (120Hz screen, high brightness required, sheen on the gaming models, etc.).
I don't game enough to warrant the cost of high-powered graphics cards + screen (+ PC upgrade to match!) as I play QW:ET once every month or two... but I did find out that the Guitar Hero 5 drum kit has a MIDI IN (vDrums!)
 
I went to the 3D gaming event yesterday and there was an nVidia representative there bigging up nVidia as much as he could.

He was a nice guy and all, before he started the nVidia script.

He was raving about CUDA and how great it is, so I asked him, "well, isn't CUDA somewhat obsolete once OpenCL and DirectCompute are out?"

His response was "actually, that's a very good point, but you still need CUDA-approved hardware to run OpenCL and DirectCompute."

I asked him why he said this, because OpenCL and DC are designed to be open standards that run on any hardware.

He had a mini rant about how ATi haven't brought out any drivers supporting them while nVidia has been supporting them since early 2009, to which I said that's irrelevant: Windows 7 isn't official yet, so there's no reason to boast about supporting them before it has been released.

Can I get an OWNED :D
 
Can I get an OWNED :D

OWNED.gif
 
I agree, it looks a bit naff.

Hardware just isn't powerful enough yet to do the calculations required. It's the same with DX11's tessellation and the use of displacement mapping instead of parallax/bump: it's all good in theory, but it's just too demanding to work well on this new release of cards, red or green.

Surely it would be better to do physics calculations on the CPU instead of the GFX card... it's not as if massive amounts of RAM bandwidth are needed...
 
Surely it would be better to do physics calculations on the CPU instead of the GFX card... it's not as if massive amounts of RAM bandwidth are needed...

Physics calcs are massively parallel, so a GPU doing them makes the most sense.

Same goes for anything that runs well on stream shaders.
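
To show what I mean by "massively parallel", here's a rough CUDA sketch of my own (just an illustration, not PhysX or anything nVidia actually ships; Particle, integrate and the other names are made up): each particle's update is independent of every other particle's, so one thread per particle is the natural way to write it.

[CODE]
// Illustrative only: one CUDA thread integrates one particle.
#include <cuda_runtime.h>

struct Particle {
    float3 pos;
    float3 vel;
};

__global__ void integrate(Particle* p, int n, float dt, float3 g)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;                 // guard the partial last block

    // No particle depends on any other here, so thousands of these
    // threads can run at the same time on a GPU.
    p[i].vel.x += g.x * dt;
    p[i].vel.y += g.y * dt;
    p[i].vel.z += g.z * dt;

    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}
[/CODE]

Launching it as integrate<<<(n + 255) / 256, 256>>>(particles, n, dt, gravity) hands every particle its own lightweight thread, which is exactly the shape of work a GPU is built for.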
 
Can I get an OWNED :D

Funniest thing was that I didn't once mention ATi myself without him mentioning them first, yet he seemed to respond to me with "well, we're still better than ATi though".

He kept trying to say that ATi has Stream and Brook+, to which I said a few times, "I don't agree with Stream or Brook+ either; I'm looking forward to an open stream computing standard that will run on any GPU."

To which he replied "Fermi's gonna be a monster GPU."

and had another rant about how nVidia technology is far better than ATi's: "have you seen Folding@home? That flies on nVidia hardware; that in itself is a demonstration that our architecture is far superior to ATi's."

He wouldn't have it when I told him that F@H hasn't been optimised for the latest ATi cards, and that if nVidia's tech was so much better, why wouldn't it be apparent in games with some sort of giant performance difference.

I don't think he'd really been "trained" to reply to questions other than "oh wow, nVidia is so cool."

I asked him a few questions about CAD usage of GPUs and he didn't have much of a clue at all.

I had a nice talk with him and he was a nice guy, but he was definitely reciting a script.
 
Wait, I just noticed that in the nVidia 'body hits wall' demo the parts of the destroyed wall quite obviously get culled (when you can see the back of that one wall, you can quite clearly see some pieces disappearing) to keep the number of simulated objects down. How lame. :\
 
Wait, I just noticed that in the nVidia 'body hits wall' demo the parts of the destroyed wall quite obviously get culled (when you can see the back of that one wall, you can quite clearly see some pieces disappearing) to keep the number of simulated objects down. How lame. :\

That's pretty poor, especially as that demo's supposed to be showing off the power of Fermi?

I'm still convinced Fermi doesn't exist outside of paper specs, to be honest.
 
Sounds like they let the lamb in amongst the wolves :S If they're going to send a rep in amongst enthusiasts, you'd think they'd send someone who knew what they were talking about.

To be honest, while I was there, there weren't many people talking to him.

Ziip Carot was there while I was talking to the nVidia guy, but he was moving about a bit.

Most of my talk was one-to-one with him.
 
why wouldn't it be apparent in games with some sort of giant performance difference.

You should've been all, 'well, stuff that's actually useful is faster on ATi GPUs, like video transcoding (guru3d) and cracking my neighbour's wifi encryption (elcomsoft). Who cares about staring at some silly proteins?' :p

Of course I'm kidding, I understand F@H is a useful tool for medical advancement, but it'd be fun to see his reaction.
 
Funniest thing was that I didn't once mention ATi myself without him mentioning them first, yet he seemed to respond to me with "well, we're still better than ATi though".

He kept trying to say that ATi has Stream and Brook+, to which I said a few times, "I don't agree with Stream or Brook+ either; I'm looking forward to an open stream computing standard that will run on any GPU."

To which he replied "Fermi's gonna be a monster GPU."

and had another rant about how nVidia technology is far better than ATi's: "have you seen Folding@home? That flies on nVidia hardware; that in itself is a demonstration that our architecture is far superior to ATi's."

He wouldn't have it when I told him that F@H hasn't been optimised for the latest ATi cards, and that if nVidia's tech was so much better, why wouldn't it be apparent in games with some sort of giant performance difference.

I don't think he'd really been "trained" to reply to questions other than "oh wow, nVidia is so cool."

I asked him a few questions about CAD usage of GPUs and he didn't have much of a clue at all.

I had a nice talk with him and he was a nice guy, but he was definitely reciting a script.

Oh dear, Kyle, you absolute legend.
 
Physics calcs are massively parallel, so a GPU doing them makes the most sense.

Same goes for anything that runs well on stream shaders.
Now that quad-core CPUs are more common, surely parallel calculations are also a good thing for quad and upcoming hex-core CPUs to deal with?

I guess it would be nice to have the option for both the CPU and GPU to do physics stuff, maybe even CPU and GPU doing physics calculations at the same time... but I guess we'll have to see how it all pans out. :)
 
Oh dear, Kyle, you absolute legend.

:D

He was definitely talking from a script. When I mentioned CAD and directing the rendering calls to the CPU when using "photorealistic" rendering applications, he said "oh yeah, Quadro cards are good for that".

He didn't seem to understand that Quadro cards aren't good for "rendering". I explained to him that I'm an architectural technician and I deal with renders of CAD models.

It took me a few goes to get across that I wasn't talking about the viewport's 3D rendered image in the likes of 3DSMax etc.

He then started telling me about nVidia's real time raytracing demo and how their cards are good for that.

Their raytracing is pants; it might be raytracing, but the quality of it is absolute poo.

I told him this, of course: any real-time raytracing currently is rather poo, and real time isn't what I'm looking for anyway.

I still don't think he actually understood that I didn't mean rendering of interactive 3D content.

He seemed to think that there was an nVidia card out there and software that could do what I'm asking.

QuickTest.png


That is the type of "render" I'm talking about; this image took a few minutes to render across 8 cores between my 2 computers.

I was saying to him that shifting the workload to the GPU should surely show some big gains over using just a CPU, but he still didn't get what I meant and resorted to just saying "well I don't know anything about Quadros, they're not my thing, I just know that they're good for CAD."

Personally, I don't see any point in FireGLs or Quadros for what I do.

To be honest, with nVidia having just bought out Mental Ray, I would have thought he'd have a bit more understanding of what I was talking about.

nVidia buying Mental Ray says to me that they're doing R&D on rendering on the GPU using compute shaders.
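
Just to sketch why offline-style rendering maps onto compute so well (this is purely my own toy example, nothing to do with Mental Ray's actual code): every pixel's ray can be traced independently, so each thread can own one pixel.

[CODE]
// Toy example only: each thread shades one pixel by testing a ray
// against a single hard-coded unit sphere. Real renderers are vastly
// more complex, but the per-pixel independence is the point.
#include <cuda_runtime.h>
#include <math.h>

__global__ void shade(uchar4* image, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Map the pixel to [-1, 1] and fire an orthographic ray down +z.
    float px = (x + 0.5f) / width  * 2.0f - 1.0f;
    float py = (y + 0.5f) / height * 2.0f - 1.0f;

    // Hit test against a unit sphere at the origin; shade by depth.
    float d2 = px * px + py * py;
    unsigned char c = (d2 < 1.0f) ? (unsigned char)(255.0f * sqrtf(1.0f - d2)) : 0;

    image[y * width + x] = make_uchar4(c, c, c, 255);
}
[/CODE]

Scale the per-pixel work up from "one sphere" to full global illumination and you get the kind of GPU renderer I'd hope that acquisition leads to.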
 
Now that quad-core CPUs are more common, surely parallel calculations are also a good thing for quad and upcoming hex-core CPUs to deal with?

I guess it would be nice to have the option for both the CPU and GPU to do physics stuff, maybe even CPU and GPU doing physics calculations at the same time... but I guess we'll have to see how it all pans out. :)

GPUs still outstrip a CPU quite a bit.

CPUs are good for linear tasks; that's what they excel at. Quad-core CPUs are good for doing a few linear tasks in parallel, not massively parallel calculations.

So while CPUs can be faster than a GPU at some things, there's a reason why GPUs are needed for games: a CPU is simply too inefficient to perform the calculations required to run current games at high res and high FPS.

Multi-core CPUs are basically a workaround for CPUs being poor at doing things in parallel.

It's a nice workaround of course, but it's not equivalent to how a fast GPU works.
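
As a rough sketch of the contrast (my own example, nothing official): a quad core splits a loop into a handful of long serial strides, whereas a GPU launch hands each element its own lightweight thread.

[CODE]
// Rough illustration only: the same "scale an array" job done as one
// serial CPU loop versus one-thread-per-element on the GPU.
#include <cuda_runtime.h>
#include <vector>

__global__ void scale_gpu(float* data, int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;            // one tiny task per thread
}

void scale_cpu(std::vector<float>& data, float k)
{
    // A CPU core chews through this serially; a quad core could at
    // best split it into four long chunks.
    for (float& v : data) v *= k;
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);
    scale_cpu(host, 2.0f);

    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // ~4096 blocks of 256 threads: roughly a million tiny tasks in
    // flight, a width no CPU core count gets anywhere near.
    scale_gpu<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);
    cudaDeviceSynchronize();

    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    return 0;
}
[/CODE]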
 
Gotcha, now I understand why GFX cards have a lot of memory bandwidth, as parallel workloads must process an awful lot of data. :)

With any luck, there will be more apps which can harness the power of these GPUs. I think even processing of sound could be done... this would allow the use of more plug-ins for programs like Cubase, for example, but I think that is a long way off right now...
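
For what it's worth, here's the sort of thing I imagine (entirely hypothetical, not any real Cubase or plug-in interface): each CUDA thread computes one output sample of a short FIR filter, which is the flavour of per-sample DSP a plug-in does.

[CODE]
// Hypothetical sketch only: one thread per output sample of a FIR
// filter. 'taps' is the impulse response; nothing here is a real
// plug-in API.
#include <cuda_runtime.h>

__global__ void fir(const float* in, float* out, int n,
                    const float* taps, int ntaps)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float acc = 0.0f;
    for (int t = 0; t < ntaps; ++t) {
        int j = i - t;                   // convolve with earlier input samples
        if (j >= 0) acc += taps[t] * in[j];
    }
    out[i] = acc;
}
[/CODE]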
 
Gotcha, now I understand why GFX cards have a lot of memory bandwidth, as parallel workloads must process an awful lot of data. :)

With any luck, there will be more apps which can harness the power of these GPUs. I think even processing of sound could be done... this would allow the use of more plug-ins for programs like Cubase, for example, but I think that is a long way off right now...

Exactly, that is how it should hopefully end up.

Oh, and in addition to what the nVidia guy was saying, I heard him say something else along the lines of "you'd need two i7 CPUs to match the upscaling performance of our Ion nettop."

Which is most definitely not true; I really did think he wasn't too sure about the technology he was demonstrating.

He was really bigging up the Ion platform and kept going, "yeah, wait until I turn GPU acceleration off in this Star Trek 1080p clip; it's going to get all choppy, look horrible and be completely unwatchable."

Well, he turned it off, and it took quite some time for it to get choppy; even when it did, it was still in the realm of watchable.

It was impressive nonetheless, but I wish people wouldn't big things up so much; he had to check 2-3 times because he didn't know if he'd actually turned GPU acceleration off.
 
I popped around to a mate who has the 3D nVidia goggles (shutter thingies). It's pretty good as a demonstration (had a look at Res Evil 5, which is a 'designed for 3D' game). Still think it needs another generation or two before it addresses some of the problems (120Hz screen, high brightness required, sheen on the gaming models, etc.).
I don't game enough to warrant the cost of high-powered graphics cards + screen (+ PC upgrade to match!) as I play QW:ET once every month or two... but I did find out that the Guitar Hero 5 drum kit has a MIDI IN (vDrums!)

I thought the 3D was great; in Batman it definitely added to the game, even if I did get a headache after 5 minutes.

Need for Speed: Shift was horrendous for me; it gave me a really bad headache from the start. The dashboard and HUD were very difficult to look at: there seemed to be two of them, offset from each other, like the 3D wasn't working properly.

The 3D camera was nice too, it gave the video that missing "thing" that is the difference between knowing something's displayed on a flat surface and seeing something with depth.

Objects in the foreground really messed with my eyes though and I had to take the glasses off after about a minute.
 
Typical nVidia bitching tbh; they have been beaten to it again and come up with this rubbish. If it was them who had released DX11 first, they would be pushing it at every chance they had.
 