It is like people complaining that turning on hardware physics reduces framerate...
Sure, you don't "need" RT cores to do ray tracing, but something along those lines is the only way to accomplish it as things stand - doing it on general compute cores is around 6 times slower. The approach Crytek have used, while admirable and something that would have been a big deal even 3-5 years ago (when anything close to what is being done with RTX in realtime was complete fantasy), is ultimately a dead end involving a lot of special-case optimisation compared to "pure" ray tracing techniques.
The actual API support for it in DXR and Vulkan's RT extensions doesn't care what hardware is underneath - you have a bunch of functions an application developer can invoke with their input data, and the API basically tells the driver "go away and get me results for this" without anything that locks it to RTX hardware. It can even run on Pascal's shaders, as demonstrated with Quake 2, they just aren't capable of the performance needed. Nothing stopping AMD doing similar...
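To make that concrete, here's a minimal sketch against the cross-vendor Vulkan ray tracing pipeline extension (VK_KHR_ray_tracing_pipeline). Pipeline, acceleration structure and shader binding table setup are all omitted, and the handle/variable names are placeholders I've made up, not anything from a real engine:

```cpp
#include <vulkan/vulkan.h>

// Ask the driver whether it exposes the vendor-neutral ray tracing pipeline
// feature - nothing in this query, or in the dispatch below, is NVIDIA-specific.
bool supportsRayTracing(VkPhysicalDevice physicalDevice)
{
    VkPhysicalDeviceRayTracingPipelineFeaturesKHR rtFeatures{};
    rtFeatures.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_RAY_TRACING_PIPELINE_FEATURES_KHR;

    VkPhysicalDeviceFeatures2 features2{};
    features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
    features2.pNext = &rtFeatures;

    vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);
    return rtFeatures.rayTracingPipeline == VK_TRUE;
}

// The "go away and get me results for this" call: one dispatch, and the driver
// decides how the rays actually get traced (RT cores, compute, whatever).
// Note: vkCmdTraceRaysKHR is an extension entry point, so in a real app the
// function pointer is fetched once with vkGetDeviceProcAddr after device creation.
void traceFrame(VkCommandBuffer cmd,
                const VkStridedDeviceAddressRegionKHR* raygenSbt,
                const VkStridedDeviceAddressRegionKHR* missSbt,
                const VkStridedDeviceAddressRegionKHR* hitSbt,
                const VkStridedDeviceAddressRegionKHR* callableSbt,
                uint32_t width, uint32_t height)
{
    vkCmdTraceRaysKHR(cmd, raygenSbt, missSbt, hitSbt, callableSbt,
                      width, height, 1);
}
```

Nothing in that path mentions RT cores at all - the driver is free to map the dispatch onto whatever the hardware has.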
Not aimed at you, but I'm getting a bit bored with the same tedious negative responses that get trotted out, either because people don't understand what is going on and/or because AMD can't do it yet. When you actually notice the techniques in action in Quake 2 RTX and understand how they can be applied to more modern applications, it is almost mind-blowing that we are pretty much there now, not still waiting for another 10 years.
From what I can make out, although Tensor cores could be used and are part of the solution in OptiX, it seems most applications are using a variation of spatiotemporal variance-guided filtering (SVGF) that runs on the compute shaders for denoising. While it could be accelerated significantly on Tensor cores, that apparently results in contention for resources on the GPU, which needs hand tuning to avoid, and the denoiser potentially needs to be trained for the task to get the best results - time that developers don't want to spend. Although there was talk of using the Tensor cores to optimise some parts of the BVH process (I assume via machine learning techniques), again from what I can see there is no such functionality currently active in any of the games I have access to the source of.
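For anyone wondering what the "variance guided" part actually does, here's a stripped-down sketch of the two core ideas - temporal accumulation of colour and luminance moments, and a variance-scaled edge-stopping weight for the spatial filter. Plain C++ with names I've made up, not code from any shipping denoiser, and the real thing runs these steps over whole images with several à-trous wavelet passes:

```cpp
#include <algorithm>
#include <cmath>

// Per-pixel history for the temporal pass: accumulated colour plus the first
// and second moments of luminance, from which per-pixel variance is estimated.
struct PixelHistory {
    float color[3];
    float moment1;   // running mean of luminance
    float moment2;   // running mean of luminance^2
};

static float luminance(const float c[3])
{
    return 0.2126f * c[0] + 0.7152f * c[1] + 0.0722f * c[2];
}

// Temporal accumulation: blend the noisy 1-sample-per-pixel result into the
// reprojected history. alpha ~0.2 keeps roughly 80% of the history each frame.
void accumulate(PixelHistory& h, const float noisy[3], float alpha = 0.2f)
{
    float l = luminance(noisy);
    for (int i = 0; i < 3; ++i)
        h.color[i] = (1.0f - alpha) * h.color[i] + alpha * noisy[i];
    h.moment1 = (1.0f - alpha) * h.moment1 + alpha * l;
    h.moment2 = (1.0f - alpha) * h.moment2 + alpha * l * l;
}

// Variance estimate that drives how aggressively the spatial filter blurs:
// high variance (still noisy) -> wide tolerance, low variance -> stay sharp.
float variance(const PixelHistory& h)
{
    return std::max(0.0f, h.moment2 - h.moment1 * h.moment1);
}

// Simplified edge-stopping weight between a pixel and a neighbour, combining
// normal, depth and variance-scaled luminance differences in the SVGF style.
float edgeStoppingWeight(const float nP[3], const float nQ[3],
                         float depthP, float depthQ,
                         float lumP, float lumQ, float varP)
{
    float wNormal = std::pow(std::max(0.0f,
        nP[0]*nQ[0] + nP[1]*nQ[1] + nP[2]*nQ[2]), 128.0f);
    float wDepth  = std::exp(-std::fabs(depthP - depthQ));
    float wLum    = std::exp(-std::fabs(lumP - lumQ) /
                             (4.0f * std::sqrt(varP) + 1e-4f));
    return wNormal * wDepth * wLum;
}
```

All of that is ordinary shader-friendly arithmetic, which is why it maps so naturally onto the compute cores rather than needing Tensor hardware.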
The new Wolfenstein uses a mixture of Tensor cores and CUDA for the denoising, in some kind of hybrid of DLSS and spatiotemporal filtering.
One thing people tend to ignore is that the Tensor cores are used for fast FP16 packed math operations. The non-RTX Turing cards end up with additional FP16 CUDA cores instead. Using Tensor cores has some advantages and disadvantages.
The other thing is that the RTX and Tensor cores combined add up to about 8-9% of the entire Turing GPU die. Their actual transistor cost is pretty minimal, so all the talk of big Turing dies has very little to do with RTX or Tensor cores. It also means significant RTX performance can be found by dedicating more transistor budget to it at 7nm - RTX is currently about 3-4% of the die area, and one could happily expand that to 8% without adding significant cost. The RTX cores are simple ray intersection test accelerators and have very low complexity. PowerVR was doing this like two decades ago.
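For a sense of how simple that fixed-function part is, the core operation an RT core accelerates is essentially a ray-box (and ray-triangle) intersection test. Something like this slab test - trivial to write down, but expensive when general shader cores have to grind through it billions of times per frame (plain C++ sketch, my own naming):

```cpp
#include <algorithm>

struct Ray  { float origin[3]; float invDir[3]; };  // invDir = 1/direction, precomputed
struct AABB { float min[3];    float max[3];    };

// Classic slab test: intersect the ray against the three pairs of axis-aligned
// planes and check whether the resulting intervals overlap. This per-node test
// is the kind of work the intersection hardware does while walking a BVH.
bool intersectAABB(const Ray& ray, const AABB& box, float tMax)
{
    float tNear = 0.0f, tFar = tMax;
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (box.min[axis] - ray.origin[axis]) * ray.invDir[axis];
        float t1 = (box.max[axis] - ray.origin[axis]) * ray.invDir[axis];
        if (t0 > t1) std::swap(t0, t1);
        tNear = std::max(tNear, t0);
        tFar  = std::min(tFar, t1);
        if (tNear > tFar) return false;  // slabs don't overlap -> miss
    }
    return true;
}
```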
The complex part of RTX is in the dynamic BVH, which is all done on CUDA int32 cores.
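To illustrate what that "dynamic BVH" work looks like, here's a minimal sketch of a bottom-up refit pass - recomputing node bounds after animated geometry moves - which is the kind of index- and bounds-juggling that stays on the general-purpose cores rather than the intersection hardware. The structure and names are my own simplification, not how any particular driver lays it out:

```cpp
#include <algorithm>
#include <vector>

struct AABB {
    float min[3], max[3];
    void expand(const AABB& o) {
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], o.min[i]);
            max[i] = std::max(max[i], o.max[i]);
        }
    }
};

// Nodes are stored so children always come after their parent in the array,
// which lets a refit run as a single reverse sweep with no recursion.
struct BVHNode {
    AABB bounds;
    int  left      = -1;  // index of left child, -1 for a leaf
    int  right     = -1;  // index of right child
    int  primitive = -1;  // leaf only: index into the primitive bounds list
};

// Refit: leaves pick up the freshly animated primitive bounds, interior nodes
// re-grow their box from their children. No full rebuild, just bounds updates.
void refit(std::vector<BVHNode>& nodes, const std::vector<AABB>& primBounds)
{
    for (int i = static_cast<int>(nodes.size()) - 1; i >= 0; --i) {
        BVHNode& n = nodes[i];
        if (n.left < 0) {
            n.bounds = primBounds[n.primitive];      // leaf
        } else {
            n.bounds = nodes[n.left].bounds;         // interior node
            n.bounds.expand(nodes[n.right].bounds);
        }
    }
}
```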