By the time SC releases, we'll all be well beyond Turing.^^ That's the real reason it won't affect when Star Citizen releases.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Signed up for a dev account and downloaded it, will give it a go later!
LOD by distance is new now?
Also when did this require a special GPU architecture?
You just need a CPU to select the right draw calls.
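For concreteness, "a CPU selecting the right draw calls" amounts to a per-frame loop like the one below. This is a minimal C++ sketch, not from any actual engine: the Renderable layout, the distance bands, and the frustum callback are all made up for illustration.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical scene object: a bounding sphere plus one mesh id per LOD level.
struct Renderable {
    float centre[3];
    float radius;
    std::vector<int> lodMeshIds;  // index 0 = highest detail; assumed non-empty
};

static float distance(const float a[3], const float b[3]) {
    float dx = b[0] - a[0], dy = b[1] - a[1], dz = b[2] - a[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Classic CPU-driven visibility: walk every object each frame, skip whatever
// fails the frustum test, and pick a mesh by camera distance. The returned
// list is what gets submitted to the GPU as draw calls.
std::vector<int> selectDrawCalls(const std::vector<Renderable>& scene,
                                 const float camPos[3],
                                 bool (*inFrustum)(const Renderable&)) {
    std::vector<int> draws;
    for (const Renderable& obj : scene) {
        if (!inFrustum(obj))
            continue;  // whole-object frustum cull on the CPU
        float d = distance(camPos, obj.centre);
        // Crude distance bands; real engines tune these per asset.
        std::size_t lod = d < 50.0f ? 0u : d < 200.0f ? 1u : 2u;
        if (lod >= obj.lodMeshIds.size())
            lod = obj.lodMeshIds.size() - 1;  // clamp to coarsest LOD
        draws.push_back(obj.lodMeshIds[lod]);
    }
    return draws;
}
```

Note the granularity: this loop can only cull and LOD whole objects, and its cost grows with scene object count on a handful of CPU cores.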
Prior to the arrival of the Turing architecture, GPUs would be forced to cull every triangle individually, creating massive workloads on both the GPU and CPU.
By combining together efficient GPU culling and LOD techniques, we decrease the number of triangles drawn by several orders of magnitude, retaining only those necessary to maintain a very high level of image fidelity. The real-time drawn triangle counters can be seen in the lower corner of the screen. Mesh shaders make it possible to implement extremely efficient solutions that can be targeted specifically to the content being rendered.
Tessellation is not used at all in the demo, and all objects, including the millions of particles, are taking advantage of Mesh Shading.
I think this might be the bit you are missing; so no, they aren't saying it's completely new, they're saying this is a much more efficient way of doing the same thing.
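For what it's worth, the "GPU culling" the blog describes works on meshlets: the mesh is pre-split offline into small clusters of triangles with precomputed bounds, and a task shader rejects whole clusters on the GPU before any vertex or triangle work happens. Shader code aside, the per-cluster backface test looks roughly like this in plain C++ (the Meshlet layout is illustrative, and the cone test follows the formulation documented by the meshoptimizer library, as a stand-in for whatever the demo actually does):

```cpp
#include <cmath>

// Illustrative meshlet: a small cluster of triangles (NVIDIA's examples use
// up to 126 per cluster) with a bounding sphere and a cone that bounds the
// spread of its triangle normals.
struct Meshlet {
    float centre[3];
    float radius;       // bounding sphere radius
    float coneAxis[3];  // unit vector: average facing direction
    float coneCutoff;   // cos of the cone half-angle
};

static float dot3(const float a[3], const float b[3]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// The per-cluster rejection a task shader evaluates across thousands of GPU
// threads: if the whole normal cone faces away from the camera, every
// triangle in the cluster is back-facing, so none of them are ever shaded.
bool meshletMightBeVisible(const Meshlet& m, const float camPos[3]) {
    float v[3] = { m.centre[0] - camPos[0],
                   m.centre[1] - camPos[1],
                   m.centre[2] - camPos[2] };
    float dist = std::sqrt(dot3(v, v));
    // Culled when dot(centre - cam, axis) >= cutoff * dist + radius;
    // visible otherwise. Frustum/occlusion tests would be added here too.
    return dot3(v, m.coneAxis) < m.coneCutoff * dist + m.radius;
}
```

A per-meshlet LOD pick slots into the same stage, which is how the demo keeps triangle counts down without the CPU touching any of it.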
How does them optimising for their own hardware lock people into their GPU ecosystem? How exactly are AMD prevented from releasing a similar optimisation?
It's like PhysX. I don't think this does anything particularly well that CPU-based culling can't do. Nvidia have put extra, unnecessary hardware into their GPUs and are inventing use cases for it.
Ray tracing and denoising, fine. But different types of shading loosely related to the tensor cores is just silly when CPUs are becoming multicore beasts these days.
Is DLSS as good as native 4K? No, so why not have more normal shaders to achieve native 4K instead?
LOD by distance is new now?
Also when did this require a special GPU architecture?
You just need a CPU to select the right draw calls.
With this the CPU no longer needs to.
Also, this guy seems to think it's a good thing, but I'm not sure what the ex-Ubisoft senior rendering lead would know.
https://twitter.com/SebAaltonen/status/1063153928857104384
I just signed up and got it.