Yes they did, and they call it Solid State Graphics (SSG), but I think that has nothing to do with the so-called 'next-gen' memory.
The problem there is bandwidth: both GDDR and HBM will eventually hit their efficiency limits (though HBM will do so at higher data rates than GDDR).
Vega has an interesting memory controller that can manage 512TB of virtual address space, which makes it possible to integrate system RAM, add-on SSDs, or even network storage (!) into a unified memory model. They're actually calling the VRAM on Vega a 'memory cache'. This all fits in with their ROCm toolchain, which seamlessly manages memory using common pointers, and their aim of building an HSA solution.
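To make the 'VRAM as a cache' idea concrete, here's a toy sketch (my own illustration, not AMD's actual HBCC logic): a small pool of fast pages fronts a much larger virtual address space, and a miss pulls a page in from the slower backing store (system RAM, SSD, or network), evicting the least recently used page when full.

```python
# Toy model of VRAM acting as an LRU page cache over a larger
# virtual address space. Hypothetical illustration only.
from collections import OrderedDict

class VramPageCache:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()  # virtual page number -> data
        self.misses = 0

    def access(self, vpage):
        if vpage in self.pages:
            self.pages.move_to_end(vpage)  # mark as recently used
            return self.pages[vpage]
        # Miss: "fetch" from the slower backing store
        self.misses += 1
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evict least recently used
        self.pages[vpage] = f"data@{vpage}"
        return self.pages[vpage]

cache = VramPageCache(capacity_pages=2)
cache.access(0); cache.access(1); cache.access(0)
cache.access(2)               # evicts page 1 (least recently used)
print(cache.misses)           # -> 3
print(1 in cache.pages)       # -> False
```

The real hardware obviously does this at page granularity with DMA and coherency machinery, but the gist is the same: the working set lives in HBM while the full address space lives elsewhere.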
On one hand Vega will pack interesting tech on the memory front; on the other hand, all of this costs die space and efficiency, just like ACEs do. So there is a handicap there.
In the end though, if they manage to get to Navi through this avenue (supposedly using multiple GPU chips on the same board working seamlessly as a single GPU), then it'll be worth it, I guess. Once we hit 7nm we'll stay there for a while, and we can't expect architectural improvements year-on-year, so multi-chip is the apparent way to scale...
EDIT: If we're lucky, maybe dual-chip Vega will be a 'testbed' for Navi, with just 2 chips that can work individually OR combined as one. There's interesting stuff on the software side there: when the 2 chips are working as one, they may be using alternate frame rendering (which is common in SLI/CrossFire), but I can't see how that would be 'transparent'. OR they may be rendering different portions of the screen, using the common memory controller to reach the VRAM that operates as a 'cache' and takes care of coherency issues. In the latter case, whichever chip finishes rendering last (the two halves of the screen can have different complexity) would be responsible for signalling the ROPs to dish out the frame and apply the adaptive sync protocol messages.