...but advances in nodes are too small to cover that so power consumption will be massive.
I don't understand what you're saying. Smaller nodes generally mean lower power consumption. MCM design also doesn't mean massive power consumption.
Like I said before, AMD have CPUs which link together 9 chiplets and don't break 300W. That's because they are very proficient at creating an interconnect that doesn't require lots of power. In fact, the biggest power draw in Zen 2 and Zen 3 is the IO die, which still uses GloFo 12nm. Zen 4 will have TSMC 5nm chiplets and a TSMC 7nm IO die; that alone will see a massive power reduction in the overall design.
Intel, however, have significant issues keeping their power requirements down. That is not a failing of the technology; that is a failing of Intel's implementation.
MCM for graphics cards will be exactly the same. AMD can make great interconnects, so theirs won't draw much power. If AMD have a low-power interconnect but Nvidia have a high-power one, then the failing is Nvidia's, not MCM's. If Nvidia and Intel have low-power interconnects in their graphics cards and AMD have a high-power one, the failing is AMD's, not MCM's.
And don't think the current power requirements of graphics cards are representative of where things are going; they're not. This is a blip born of Nvidia's arrogance. If Nvidia had played the game a little nicer they would've had Ampere gaming cards built at TSMC, and the power requirements would be much lower. Instead, they screwed themselves over and got stuck using Samsung's objectively worse 10nm-class node. That gave AMD the favourable optics to push RDNA 2 to be more competitive, at the expense of power requirements higher than perhaps originally planned.
Nvidia's Ampere isn't hot and hungry because the design says so; it's hot and hungry because of how it's made. The same goes for Intel's Rocket Lake. This doesn't mean everything in the future is going to be hot and hungry and need a 1000W PSU.