Seen this style of comment in a few places. It basically gets things the wrong way round: multi-core in CPUs is effectively doing what has been in place for GPUs for ages. GPUs have been massively parallel for a long time (thousands of cores!). However, "multi-GPU" doesn't refer to that (it's the normal base case) but to having separate physical GPUs, each served by its own resources. It's more akin to dual-socket server CPUs, except that the memory is not one shared pool but a smaller separate pool for each GPU.
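
To make the "separate pools" point concrete, here's a minimal sketch - in CUDA purely for illustration, since the point itself is API-agnostic: each physical GPU appears as its own device with its own memory, and data moves between them only via an explicit copy over the bus.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    // Each physical GPU is a separate device with its own, smaller memory
    // pool -- there is no shared pool the way dual-socket CPUs share RAM.
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);
        size_t freeBytes = 0, totalBytes = 0;
        cudaMemGetInfo(&freeBytes, &totalBytes);
        printf("GPU %d: %zu MiB total, %zu MiB free\n",
               dev, totalBytes >> 20, freeBytes >> 20);
    }

    if (deviceCount >= 2) {
        // A buffer allocated on GPU 0 lives only in GPU 0's pool; GPU 1
        // cannot touch it without an explicit transfer.
        float* onGpu0 = nullptr;
        cudaSetDevice(0);
        cudaMalloc(&onGpu0, 1 << 20);

        float* onGpu1 = nullptr;
        cudaSetDevice(1);
        cudaMalloc(&onGpu1, 1 << 20);

        // Explicit device-to-device copy over the bus.
        cudaMemcpyPeer(onGpu1, /*dstDevice=*/1, onGpu0, /*srcDevice=*/0, 1 << 20);

        cudaFree(onGpu1);
        cudaSetDevice(0);
        cudaFree(onGpu0);
    }
    return 0;
}
```

The cudaMemcpyPeer call is the tell: it's like a NUMA copy between CPU sockets, except there's no shared pool to fall back on.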
Perhaps we could in future see a few chips on one board as a way of getting more transistors without yield issues. But if it happened that way, the result would operate more like one large GPU anyway, and I suspect we're nowhere near the point where it makes sense yet: it adds a whole ton of complexity (thus cost) and will require different architectures to handle the varying latency between chips. It may never happen.
Edit: I realise some people are saying this is what's coming in Navi - but even if it is, it's still not "like CPUs", nor will it effectively be multi-GPU from a developer's standpoint.
DX12 is looking the other way: how to get lots of GPUs as they exist now to work together better (explicit multi-adapter). That pushes much more difficult problem-solving onto the software side, and it's unlikely to be adopted heavily in most games any time soon - significant extra complexity means significant extra cost, which is a poor trade-off when only a few users benefit. Perhaps in future it'll become more standard and game engines etc. will make it easier for game devs. (Abstraction/tooling are very helpful in gaining mass adoption, despite what you may have read about low-level access being some kind of magic bullet.)
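
For a feel of what that software-side problem-solving involves, here's a hedged sketch of the explicit-split model - again in CUDA rather than real D3D12 explicit multi-adapter code, but the shape is the same: the application, not the driver, decides how work and data are divided across GPUs and when everything is synchronised.

```cuda
#include <vector>
#include <cuda_runtime.h>

// Trivial kernel standing in for "half a frame's worth of work".
__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    std::vector<float> host(n, 1.0f);

    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    int gpus = deviceCount >= 2 ? 2 : 1;
    int chunk = n / gpus;

    std::vector<float*> devPtrs(gpus);

    // The application decides the split: each GPU gets its own slice,
    // uploaded into its own memory pool.
    for (int d = 0; d < gpus; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&devPtrs[d], chunk * sizeof(float));
        cudaMemcpyAsync(devPtrs[d], host.data() + d * chunk,
                        chunk * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<(chunk + 255) / 256, 256>>>(devPtrs[d], chunk, 2.0f);
        cudaMemcpyAsync(host.data() + d * chunk, devPtrs[d],
                        chunk * sizeof(float), cudaMemcpyDeviceToHost);
    }

    // The application also owns synchronisation: nothing is coherent
    // until every device has been explicitly drained.
    for (int d = 0; d < gpus; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(devPtrs[d]);
    }
    return 0;
}
```

In an actual DX12 title the equivalent involves per-adapter command queues, cross-adapter resources and fence-based synchronisation, plus load balancing between mismatched GPUs - which is exactly where that "significant extra complexity" lands.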