NVIDIA Asteroids Demo

LOD by distance is new now?

Also when did this require a special GPU architecture?

You just need a CPU to select the right draw calls.
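For anyone following along, this is roughly what the classic CPU-side approach being described looks like: pick a LOD mesh per object from its camera distance, then issue only those draw calls. A minimal Python sketch (the distance bands, triangle counts, and object names are made-up numbers, purely illustrative):

```python
import math

# Hypothetical LOD table: (max distance, triangle count of that LOD mesh).
LOD_LEVELS = [(50.0, 20000), (200.0, 4000), (1000.0, 500), (float("inf"), 50)]

def select_lod(distance):
    """Return the index of the first LOD whose distance band covers `distance`."""
    for i, (max_dist, _) in enumerate(LOD_LEVELS):
        if distance <= max_dist:
            return i
    return len(LOD_LEVELS) - 1

def build_draw_list(camera_pos, objects):
    """objects: list of (object_id, position). Returns (object_id, lod_index) pairs."""
    draws = []
    for obj_id, pos in objects:
        d = math.dist(camera_pos, pos)       # Euclidean distance to the camera
        draws.append((obj_id, select_lod(d)))
    return draws

objects = [("rock_a", (10.0, 0.0, 0.0)), ("rock_b", (600.0, 0.0, 0.0))]
print(build_draw_list((0.0, 0.0, 0.0), objects))
# [('rock_a', 0), ('rock_b', 2)] -- near object gets full detail, far one a coarse LOD
```

This runs once per frame on the CPU before submitting draw calls, which is exactly the part the mesh-shading pitch claims to move onto the GPU.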
 

I think this might be the bit you are missing:
Prior to the arrival of the Turing architecture, GPUs would be forced to cull every triangle individually, creating massive workloads on both the GPU and CPU.

By combining together efficient GPU culling and LOD techniques, we decrease the number of triangles drawn by several orders of magnitude, retaining only those necessary to maintain a very high level of image fidelity. The real-time drawn triangle counters can be seen in the lower corner of the screen. Mesh shaders make it possible to implement extremely efficient solutions that can be targeted specifically to the content being rendered.

Tessellation is not used at all in the demo, and all objects, including the millions of particles, are taking advantage of Mesh Shading.

So no, they aren't saying it's completely new; they're saying this is a much more efficient way of doing the same thing.
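To make the "cull whole groups instead of every triangle" point concrete, here's a toy Python model of cluster-level culling: test one bounding sphere per small cluster of triangles and drop whole clusters at once. (The plane, cluster bounds, and triangle counts are invented for illustration; real mesh-shader culling runs on the GPU against all six frustum planes.)

```python
def sphere_outside_plane(center, radius, plane):
    """plane = (nx, ny, nz, d); True if the sphere is fully on the negative side."""
    nx, ny, nz, d = plane
    dist = nx * center[0] + ny * center[1] + nz * center[2] + d
    return dist < -radius

def cull_clusters(clusters, frustum_planes):
    """clusters: list of (center, radius, triangle_count).
    Returns the total triangle count of clusters that survive culling."""
    visible = 0
    for center, radius, tris in clusters:
        if not any(sphere_outside_plane(center, radius, p) for p in frustum_planes):
            visible += tris
    return visible

planes = [(1.0, 0.0, 0.0, 0.0)]  # hypothetical half-space: keep x >= 0
clusters = [((-10.0, 0.0, 0.0), 1.0, 128), ((5.0, 0.0, 0.0), 1.0, 128)]
print(cull_clusters(clusters, planes))
# 128 -- one sphere test rejected the whole 128-triangle cluster behind the plane
```

One cheap sphere test per cluster replaces 128 per-triangle tests here, which is the efficiency argument in a nutshell.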
 

Well, their comparison was against a BS 3.5 trillion triangle number. If they really want to demonstrate the benefit, how about using decent, aggressive culling as the baseline and comparing against that?

You really think games like GTA V aren't using aggressive culling? Also, their demo is nothing like a real-world game such as GTA V. Note that GTA V doesn't need special GPU hardware, because a good CPU works fine.

Nvidia want to lock people into their GPU ecosystem for no reason. I hope they fail.
 
How does talking about optimising for their hardware lock people into their GPU ecosystem? And how are AMD prevented from releasing a similar optimisation?

It's like PhysX. I don't think this does anything particularly well that CPU-based culling can't do. Nvidia have put this extra, unnecessary hardware into their GPUs and are now inventing use cases for it.

Raytracing and denoising, fine. However, different types of shading loosely related to the tensor cores are just silly when CPUs are becoming multicore beasts these days.

Is DLSS as good as native 4K? No, so why not have more normal shaders and achieve native 4K instead?
 
Of course they want to lock people into their ecosystem; they think they are Apple. Look at G-Sync: it does the same thing as FreeSync but costs £200 more...
 

Well, there is another vendor who has been plugging away at the "change nothing, just add more shaders" approach, and it hasn't really panned out that well in terms of producing the fastest GPU for a particular power budget.
 
With this, the CPU no longer needs to.

Also, this guy seems to think it's a good thing, but I'm not sure what the ex-Ubisoft senior rendering lead would know.
https://twitter.com/SebAaltonen/status/1063153928857104384

Lol, so instead of the general-purpose CPUs everyone has anyway (which are becoming far more powerful for the same money these days), let's use £1k GPUs to achieve a similar goal.

Like I said, PhysX version 2 is the path Nvidia are taking. That way, once game developers write their games against Nvidia-specific shading code, you are locked in.

G-Sync is just another example of this. Instead of an open standard which does the job, let's add £200 to every monitor for our proprietary version.
 
Well, that's just it, isn't it: it's not a similar goal. The GPU is far faster and much more efficient at the task, which not only frees up the CPU but also reduces the load on the limited PCIe bandwidth. You really do not want to be going across that bus to the CPU and main memory for such an integral part of your engine if you don't have to.
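To put a rough number on the bandwidth point, here's a back-of-envelope sketch of what re-uploading a CPU-built draw list every frame could cost. All figures are illustrative assumptions, not measurements of any real engine:

```python
# Back-of-envelope: traffic from shipping a CPU-built draw list over the
# bus every frame vs. keeping the visibility decision on the GPU.
# Every number below is an illustrative assumption.
objects = 1_000_000      # assumed instance count (e.g. an asteroid field)
bytes_per_draw = 16      # assumed size of one packed draw record
fps = 60

per_frame_bytes = objects * bytes_per_draw
per_second_bytes = per_frame_bytes * fps

print(per_frame_bytes // 1_000_000, "MB per frame")      # 16 MB per frame
print(per_second_bytes // 1_000_000, "MB/s sustained")   # 960 MB/s sustained
```

Even under these generous assumptions that's a steady stream of bus traffic and CPU work that a GPU-resident culling path simply never generates.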

It all depends on how easy it is to implement, and whether Nvidia can get support in DX and Mantle.
 
Did anyone ever find a link for this that wasn’t behind the Nvidia Dev login?

Or can anyone sign up and download it?
 