Because they're not up to par with Intel or Apple due to memory bandwidth limitations.
You can see on SPECint2017 (the average score) that the 16-core 5950X only ties the 12900K in MT despite having more cores, while being more or less tied in ST. And in SPECfp2017 it actually falls noticeably behind the 12900K (and the M1 Max). Intel and Apple would likely have scaled just as poorly if they were on DDR4 (indeed, you can see the 12900K with DDR4 doesn't scale well either).
This is because each DDR5 channel is really two independent 32-bit subchannels instead of one 64-bit channel (as it was in DDR4), and each subchannel can operate independently of the other. With the burst length doubled (from 8 to 16), each 32-bit subchannel still delivers a full 64-byte cache line per burst, but the two subchannels can serve two such bursts concurrently instead of one, which utilises the available bandwidth much more efficiently. This isn't an issue on Threadripper/EPYC, since those get more channels, but it's a serious issue limiting the 5950X for workstations running certain workloads. Dual-channel DDR4 simply was not enough for 16 cores of Zen 3.
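To put rough numbers on that, here's a back-of-the-envelope peak bandwidth calculation. The transfer rates (DDR4-3200, DDR5-4800) are typical stock speeds I'm assuming for illustration, and real sustained bandwidth is lower than peak:

```python
# Peak theoretical bandwidth = transfer rate x bus width x channel count.
# Assumed speeds: DDR4-3200 (5950X stock) vs DDR5-4800 (12900K stock).

def peak_gbs(transfers_per_sec, bus_bits, channels):
    """Peak bandwidth in GB/s for a given transfer rate and bus width."""
    return transfers_per_sec * (bus_bits / 8) * channels / 1e9

# DDR4: two independent 64-bit channels
ddr4 = peak_gbs(3200e6, 64, 2)   # 51.2 GB/s
# DDR5: each DIMM channel is two independent 32-bit subchannels,
# so a dual-channel setup effectively has four 32-bit channels
ddr5 = peak_gbs(4800e6, 32, 4)   # 76.8 GB/s

print(f"DDR4-3200 dual channel: {ddr4:.1f} GB/s -> {ddr4/16:.1f} GB/s per core (16 cores)")
print(f"DDR5-4800 dual channel: {ddr5:.1f} GB/s -> {ddr5/16:.1f} GB/s per core (16 cores)")

# Burst granularity: DDR4 BL8 x 64 bit = 64 bytes per burst;
# DDR5 BL16 x 32 bit = also 64 bytes per burst, but the two
# subchannels can run two such bursts independently at once.
```

So even before counting the subchannel efficiency gains, 16 cores on dual-channel DDR4 are splitting barely 3 GB/s each at peak.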
This doesn't show up in gaming or rendering (those never saturate memory bandwidth and are more sensitive to latency), so the limitation isn't visible in gaming/Blender/Cinebench tests, but it is quite visible in SPEC's lbm, cam4, fotonik3d, roms, etc. These are more representative of scientific/financial compute workloads: code compiling, particle simulations, Monte Carlo simulations, and so on.
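A quick way to see why those SPEC subtests hit the wall is a roofline-style check: a kernel is bandwidth-bound whenever its arithmetic intensity (FLOPs per byte moved) falls below the machine balance (peak FLOPs divided by peak bandwidth). The GFLOP/s figure below is my own rough assumption for a 16-core Zen 3 chip, just for illustration:

```python
# Roofline-style sketch: is a kernel limited by compute or by memory?
# The peak_gflops figure is an assumed ballpark, not a measured value.

def is_bandwidth_bound(flops_per_byte, peak_gflops, peak_gbs):
    """True if the kernel's arithmetic intensity is below machine balance."""
    machine_balance = peak_gflops / peak_gbs  # FLOPs/byte the cores can absorb
    return flops_per_byte < machine_balance

# STREAM-triad-like kernel a[i] = b[i] + s*c[i]:
# 2 FLOPs per 3 doubles moved = 2/24 FLOPs per byte
triad_intensity = 2 / 24

# Assumed: ~1000 GFLOP/s FP64 for 16 Zen 3 cores, 51.2 GB/s DDR4-3200 peak
print(is_bandwidth_bound(triad_intensity, 1000, 51.2))  # True: memory-bound
# A compute-heavy kernel (say 100 FLOPs/byte) would not be:
print(is_bandwidth_bound(100, 1000, 51.2))              # False: compute-bound
```

With a machine balance near 20 FLOPs/byte, streaming workloads like lbm sit orders of magnitude below the roofline, so adding cores does almost nothing while gaming/Cinebench-style code stays comfortably compute-bound.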
Once AMD moves to DDR5 they will very likely scale as well as Intel and Apple (AMD already scales well in the Threadripper/EPYC range, so there's no architectural limit), and their MT FP performance would blow Intel out of the water given the core-count advantage.