I'll be receptive to you providing a source explaining that. Otherwise, WTF? You're saying VRAM only works because it's daisy-chained like dual-channel/quad-channel system RAM (but more so), where memory bus width is everything and capacity is an afterthought? That's well outside my know-how if so.
Also, if AMD wants its maximum memory interface width and that requires a certain number of chips, then presumably it could still do 4GB/6GB cards with smaller-capacity memory chips in this exciting new era where Nvidia offers only 6GB cards under the £500 price tag. Shame on AMD for doing 8GB at £150 with the RX 570 all the way up to the Vega 64 at £400+, with all that wasteful memory when 6GB will apparently do.
It's the very basics of how memory works. GDDR chips have a 32-bit bus width, so each VRAM chip provides 32 bits of memory lanes. If your GPU needs the bandwidth afforded by a 256-bit bus, then you need 8 VRAM chips. If you only need 192-bit, then you only need 6 VRAM chips. Chips come in different densities; at the moment 1GB is fairly standard and affordable, and I'm not sure what other sizes are available, if any. Thus the 2060 ends up with 6x1GB, and the RX 580 with 8x1GB.
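The arithmetic above can be sketched in a few lines. This is just an illustration of the chips-per-bus reasoning in this thread (the 32-bit-per-chip figure is from the post; the function name and the 1GB default density are my own placeholders):

```python
BITS_PER_CHIP = 32  # each GDDR chip provides a 32-bit slice of the total bus

def vram_config(total_bus_bits, chip_density_gb=1):
    """Chips needed for a given bus width, and the resulting capacity in GB."""
    chips = total_bus_bits // BITS_PER_CHIP
    return chips, chips * chip_density_gb

print(vram_config(192))  # RTX 2060: (6, 6)  -> 6 chips, 6GB
print(vram_config(256))  # RX 580:   (8, 8)  -> 8 chips, 8GB
```

Capacity falls straight out of the bus width once the chip density is fixed, which is the whole point: the 2060's 6GB isn't a choice about capacity, it's a consequence of the 192-bit bus.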
For Nvidia to increase the VRAM on the 2060, they could have some GDDR6 chips share a single 32-bit bus. But that means those chips would have to share memory bandwidth. Nvidia has done this in the past on lower-end cards. However, just look at how much trouble Nvidia got into over the 970 and its memory configuration, where 0.5GB of the memory was slower. That huge outcry over nothing has pretty much guaranteed the 2060 wouldn't get 8GB of VRAM with such an arrangement.
The alternative is to create a 256-bit memory interface, which adds to the die size and production costs. That's basically what the 2070/2080 are.
In theory, if GDDR6 chips come in different densities then the VRAM capacity can also change. This might happen with the lower-end 1650 cards with a 3GB option. If a double-density chip were available you could have a 2060 with 12GB of memory, at a huge cost but without any more bandwidth, so the extra memory would be largely useless.
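The density point works out the same way: the bus width fixes the chip count, and density only scales capacity, not bandwidth. A minimal sketch (the 2GB-density chip here is hypothetical, as in the post):

```python
BITS_PER_CHIP = 32                   # 32-bit slice of the bus per GDDR chip
bus_bits = 192                       # the 2060's bus width
chips = bus_bits // BITS_PER_CHIP    # 6 chips, regardless of density

capacity_1gb = chips * 1             # 1GB chips -> 6GB (the real 2060)
capacity_2gb = chips * 2             # hypothetical 2GB chips -> 12GB,
                                     # still only 192 bits of bandwidth
print(capacity_1gb, capacity_2gb)    # 6 12
```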
I also expect that the existence of GDDR6 has actually made the case for 8GB weaker, because a 192-bit bus provides plenty of bandwidth, especially with Nvidia's efficient design. Next generation will probably see the 3060 move to a 256-bit interface and 8GB of VRAM.