Unified memory won't suddenly push texture sizes in games far beyond current PC numbers, because resolution limits the benefit: if textures are already large enough for 1080p, making them massively bigger wouldn't improve picture quality. PCs are already where they need to be; consoles have been miles and miles behind and will jump forward, but only to match the PC.
Unified memory really has nothing to do with increasing total memory; the jump from 512MB to 8GB was that. What the unified part allows is significantly higher efficiency: a huge reduction in the largely worthless repeated copying, deleting and updating of the same data.
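To make the "worthless repeat copying" point concrete, here's a toy C sketch (the buffer names and sizes are made up for illustration, not how any console actually does it): with split pools every update to an asset has to be mirrored into a second copy, while a unified pool has one buffer that both sides simply point at.

```c
#include <string.h>

#define ASSET_SIZE 64

/* Split pools: the CPU and GPU each hold their own copy of the asset,
 * so every update means writing one copy and then mirroring it into
 * the other -- an extra memcpy per update. */
static void update_split(char *cpu_copy, char *gpu_copy, char value) {
    memset(cpu_copy, value, ASSET_SIZE);         /* CPU updates its copy   */
    memcpy(gpu_copy, cpu_copy, ASSET_SIZE);      /* the redundant copy     */
}

/* Unified pool: one buffer, both devices read the same bytes, no copy
 * and no duplicate storage. */
static void update_unified(char *shared, char value) {
    memset(shared, value, ASSET_SIZE);
}
```

The saving isn't just the copy time: it's also the duplicate storage and the bookkeeping to keep the two copies in sync.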
In terms of memory, the XB1 reserves at least 3GB, 1GB/1 core for each OS afaik. The PS4 is said to be reserving around 2.5GB for the OS, with a further 1GB held back and not yet usable by games. If Sony adds a load of features over the next, say, 3 years that push the OS requirements well beyond 2.5GB, then a game made today that uses precisely 5.5GB will have problems running once the OS needs more memory. They can (and likely will) release most of that reserved memory to games in the future.
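The budgeting argument is just arithmetic, but it's worth spelling out. A tiny sketch using the post's rough numbers (8GB total, ~2.5GB OS reservation; all figures are the post's estimates, not official specs):

```c
#include <stdbool.h>

/* What's left for games once the OS takes its cut (all values in GB). */
static double game_budget(double total_gb, double os_gb) {
    return total_gb - os_gb;
}

/* Does a game built against today's budget still fit if the OS
 * reservation later grows? */
static bool still_fits(double game_gb, double total_gb, double new_os_gb) {
    return game_gb <= game_budget(total_gb, new_os_gb);
}
```

With an 8GB pool and a 2.5GB OS, the budget is 5.5GB; a game that uses exactly that has zero headroom, which is why the platform holder reserves extra up front and releases it later rather than the other way round.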
Modern GPUs on PC will not under any circumstances suddenly require 8GB to run multiplatform titles. I would be surprised if they didn't all run on 2GB cards without a problem (at the same settings/res), maybe on much less than that.
In terms of memory, it doesn't have to be in the same place to be unified, it just all needs to be accessible to everything. HSA features unified memory, and Kaveri was supposed to be the first desktop HSA part... but no one really knows how that will work yet. Only the APUs, or will it work with discrete GPUs? Will it only work well with discrete AMD GPUs? Quite likely, not because they would lock out Nvidia but because the parts would need to be HSA compatible. There are MANY companies working towards HSA compatibility, Apple, ARM, AMD... I honestly can't remember, but I don't think Intel or Nvidia have said they will do HSA compatible parts.
Nvidia has been working on their own ways to heavily cut down on that overhead; I forget the name of their project, but I think it's a PCIe-based way of doing it.
http://on-demand.gputechconf.com/gtc-express/2011/presentations/cuda_webinars_GPUDirect_uva.pdf
If you look at the first couple of slides, they illustrate one of the advantages. Unified memory is like the right side of the first slide: combining all the memory and letting everything see it as one pool. Then in the second slide, though it isn't well illustrated, right now copying data from GPU 0's memory to GPU 1's memory means sending it to the CPU first and then on to the other GPU, interrupting the CPU with basically needless work and getting in the way of other traffic. With unified-memory-style systems you can copy the data directly from GPU 0's memory to GPU 1's memory. It's noticeably quicker and has a FAR lower CPU overhead, effectively none, meaning the transfer is much faster and whatever the CPU is doing is effectively sped up too.
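The two transfer paths from that slide can be sketched in plain C, modelling the device memories as ordinary buffers and counting the hops (this is a toy model of the idea, not real CUDA/GPUDirect code; the function names are made up):

```c
#include <string.h>

#define BUF 32

/* Old path: GPU 0 -> CPU staging buffer -> GPU 1.
 * Returns the number of copies the CPU was dragged into. */
static int copy_via_cpu(const char *gpu0_mem, char *cpu_staging,
                        char *gpu1_mem) {
    memcpy(cpu_staging, gpu0_mem, BUF);  /* hop 1: device -> host   */
    memcpy(gpu1_mem, cpu_staging, BUF);  /* hop 2: host -> device   */
    return 2;
}

/* Unified path: one address space, so the data goes peer-to-peer. */
static int copy_direct(const char *gpu0_mem, char *gpu1_mem) {
    memcpy(gpu1_mem, gpu0_mem, BUF);     /* one hop, no CPU staging */
    return 1;
}
```

Half the copies, no staging buffer to allocate, and the CPU never has to stop what it's doing.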
In terms of bandwidth and DDR3/4 vs GDDR5, you've got power to consider (GDDR5 uses significantly more than DDR3) and latency to consider (they aren't as far apart as people think, but DDR3 is lower latency). Different access patterns do best with different types of memory. GPU work needs exceptionally high bandwidth and isn't very latency sensitive, so GDDR5 trades higher latency for very high bandwidth. CPU work usually needs low latency and not an awful lot of bandwidth, so DDR3/4 are tuned for lower latency rather than huge bandwidth. Having different types of memory for different types of device still makes sense, and local memory for each device will always be orders of magnitude faster than memory further away.
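You can see why each tuning wins on its own workload with a crude transfer-time model, time = latency + size/bandwidth. The latency and bandwidth numbers below are purely illustrative stand-ins, not real DDR3 or GDDR5 specs:

```c
/* Crude model: total time (ns) = fixed latency + bytes / bandwidth.
 * With bandwidth in GB/s, 1 GB/s moves ~1 byte per ns. */
static double xfer_ns(double latency_ns, double bytes, double gb_per_s) {
    return latency_ns + bytes / gb_per_s;
}
```

Plug in a low-latency/modest-bandwidth part (say 50ns, 25GB/s) against a high-latency/high-bandwidth one (say 80ns, 176GB/s): for a single 64-byte cache line the latency term dominates and the DDR3-like part wins, while for a 1MB texture the bandwidth term dominates and the GDDR5-like part wins by a mile. That's the whole CPU-vs-GPU memory trade-off in one formula.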