Tensor cores are used for denoising the ray-traced output produced by the RT cores.
My understanding, from having read the technical whitepapers and watched several NVIDIA presentations, is that RT denoising is not AI-based and does not need to use the tensor cores.
"Denoising for real time applications is using an algorithm (cross-bilateral filtered denoising) and not AI (like NVIDIA's OptiX Ray Tracing Engine does)."
Here are some resources:
https://www.gdcvault.com/play/1024813/
https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/
What a tensor core does is a fused matrix multiply-add operation (https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html#tensorop). Basically, it performs the entire operation D = A × B + C in a single clock cycle, where the same work would take around 8 cycles on normal CUDA cores (A, B, C, and D are small matrices; A and B are FP16, while C and D can be FP16 or FP32). This operation is used extensively in convolutional neural networks, which are the backbone of modern AI.
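For the curious, CUDA exposes this operation directly through the WMMA API (Volta/sm_70 or later). Here's a minimal sketch, not a full GEMM, of one warp computing a single 16×16×16 tile of D = A × B + C on the tensor cores; the kernel name and the row-major A / column-major B layout are just assumptions for illustration:

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// Illustrative sketch: one warp computes one 16x16 tile of D = A * B + C
// on the tensor cores. Assumes A is row-major FP16, B is column-major FP16,
// and C/D are row-major FP32, all with leading dimension 16.
__global__ void wmma_gemm_tile(const half *a, const half *b,
                               const float *c, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    // Load the input tiles into per-warp fragments.
    wmma::load_matrix_sync(a_frag, a, 16);
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::load_matrix_sync(acc_frag, c, 16, wmma::mem_row_major);

    // The fused multiply-add: acc = A * B + acc, executed on tensor cores.
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);

    // Write the result tile back to global memory.
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
}
```

Launched with a single warp (e.g. `wmma_gemm_tile<<<1, 32>>>(a, b, c, d)` after compiling with `nvcc -arch=sm_70`); a real GEMM just tiles this over the full matrices.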
I suppose the bilateral filter that's used in RT denoising could be written to harness the tensor cores, but there's not really any particular reason to.
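To make the contrast concrete, here's a minimal single-channel bilateral filter sketch (a cross-bilateral denoiser would compute the range weight from guide buffers like normals and depth instead of the noisy color itself; the kernel name, radius, and parameters here are assumptions for illustration). Note that it's a per-pixel stencil of scalar multiply-adds, not a matrix product, which is why tensor cores don't naturally buy it anything:

```cuda
#include <cuda_runtime.h>

// Illustrative single-channel bilateral filter: each output pixel is a
// weighted average of its neighbors, where the weight falls off with both
// spatial distance (sigma_s) and intensity difference (sigma_r), so edges
// are preserved while flat noisy regions get smoothed.
__global__ void bilateral_filter(const float *in, float *out,
                                 int w, int h,
                                 float sigma_s, float sigma_r) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    const int R = 3;                // filter radius (arbitrary for the sketch)
    float center = in[y * w + x];
    float sum = 0.0f, norm = 0.0f;

    for (int dy = -R; dy <= R; ++dy) {
        for (int dx = -R; dx <= R; ++dx) {
            // Clamp neighbor coordinates at the image border.
            int nx = min(max(x + dx, 0), w - 1);
            int ny = min(max(y + dy, 0), h - 1);
            float v = in[ny * w + nx];

            // Spatial term: penalize distant pixels.
            float ds = (dx * dx + dy * dy) / (2.0f * sigma_s * sigma_s);
            // Range term: penalize pixels with very different intensity.
            float dr = (v - center) * (v - center)
                     / (2.0f * sigma_r * sigma_r);

            float wgt = __expf(-(ds + dr));
            sum  += wgt * v;
            norm += wgt;
        }
    }
    out[y * w + x] = sum / norm;
}
```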
Do you believe DLSS could also be handled at the same time?
Unless there is some other evidence I've missed that would suggest otherwise, yes, I believe that DLSS and RT can be handled simultaneously. In fact, I have every reason to suspect they were designed to do just that, so as to soften the burden of RT at higher resolutions.