I did say the tech is in its infancy. When it's better, there's no reason it couldn't be used to improve audio. In any case, you don't need to simulate every audio source; simulating even a handful of the most important ones could improve immersion a lot, though yes, it would use a lot of resources and is far more complicated than light (see the rough sketch below).

You can and do get noise over USB too, introduced by components on the mobo. It's less of an issue on higher-end boards, but it can still happen depending on the quality of the components the mobo maker has chosen, the quality of the PSU, and so on. That's why people sometimes hear pops and crackles over USB but not via optical. Again, it totally depends on the number of devices, the quality of the USB controller, the quality of the surrounding components feeding off the same power, etc.
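To make the audio point concrete, here's a minimal sketch of the "simulate only some sources" idea in Python. Everything in it (the Source class, the occlusion callback, the budget of 8) is made up for illustration and isn't any real engine's API: rank sources by how loud they'd be at the listener, fully process only the top few, and apply a crude muffling factor to occluded ones.

```python
import math
from dataclasses import dataclass

# Hypothetical scene types; illustrative only, not a real engine API.
@dataclass
class Source:
    pos: tuple       # (x, y, z) world position
    loudness: float  # base volume at 1 unit distance

def audible_level(src, listener, occluded):
    """Inverse-square distance falloff, plus a crude muffling
    factor when something sits between source and listener."""
    d = math.dist(src.pos, listener)
    level = src.loudness / max(d * d, 1e-6)
    return level * (0.25 if occluded else 1.0)

def mix(sources, listener, is_occluded, budget=8):
    # Rank by potential (unoccluded) loudness, then fully simulate
    # only the loudest `budget` sources; the rest are skipped.
    ranked = sorted(sources, key=lambda s: -audible_level(s, listener, False))
    return [(s, audible_level(s, listener, is_occluded(s))) for s in ranked[:budget]]

if __name__ == "__main__":
    listener = (0.0, 0.0, 0.0)
    sources = [Source((float(i), 0.0, 0.0), 1.0) for i in range(1, 50)]
    # Pretend every other source is behind a wall.
    for s, lvl in mix(sources, listener, lambda s: int(s.pos[0]) % 2 == 0):
        print(f"source at {s.pos}: level {lvl:.4f}")
```

Even this toy version needs a distance calculation and an occlusion test per source per frame; properly tracing reflections off geometry is orders of magnitude more work, which is the resource cost I meant.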
My previous mobo had dedicated USB ports designed for DACs and other audio devices. It was called the AMP-UP USB port, isolated from the rest of the mobo so it couldn't pick up interference from anything else. Not all boards have this type of setup.
If all is well, 99% of setups should hear zero audible difference between USB and optical out from the mobo.
This is not accurate: look at previous generations of cards versus current ones, and more RT cores have meant better fps in RT situations. The CPU isn't that relevant when playing at 1440p or above, as evidenced by every screenshot showing an RTSS overlay: the GPU is doing all the work, while the CPU just has to prepare and submit the next frame, which isn't that much of a task. As an example, a 12600K or better will still smash through path tracing on a high-end GPU with ease; there will be an fps difference versus a faster CPU, but it won't be huge, because the bulk of the workload is being done by the GPU.
If the resolution is one that is CPU bound, like 1080p, then this all changes, again as evidenced by any screenshot or video showing an RTSS overlay (the basic logic is sketched below).
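For anyone who'd rather check their own system than eyeball an overlay, here's a rough Python sketch of that logic. It assumes you've already exported per-frame CPU and GPU times in milliseconds (e.g. from a PresentMon or CapFrameX capture); the 10% margin and the example numbers are arbitrary illustrations, not a standard.

```python
# Rough sketch of the "who's the bottleneck" judgement an RTSS overlay
# lets you make by eye: whichever side takes longer per frame, on
# average, is the limiter.

def bottleneck(cpu_ms, gpu_ms, margin=1.10):
    avg_cpu = sum(cpu_ms) / len(cpu_ms)
    avg_gpu = sum(gpu_ms) / len(gpu_ms)
    if avg_gpu > avg_cpu * margin:
        return "GPU bound"   # typical at 1440p+ with RT/path tracing
    if avg_cpu > avg_gpu * margin:
        return "CPU bound"   # typical at 1080p on a fast GPU
    return "balanced"

# Made-up example captures: 4K path tracing vs 1080p on the same rig.
print(bottleneck(cpu_ms=[6.1, 5.9, 6.3], gpu_ms=[16.5, 17.0, 16.2]))  # GPU bound
print(bottleneck(cpu_ms=[9.8, 10.1, 9.9], gpu_ms=[6.0, 6.2, 5.9]))    # CPU bound
```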