Was just wondering about VRAM; the focus tends to be on the amount of VRAM as the priority. But can a lesser amount of VRAM be OK if the memory interface width and bandwidth are higher?
For example, 8GB of VRAM with a 256-bit interface and 448.0 GB/s of bandwidth vs 12GB of VRAM with a 192-bit interface and 360.0 GB/s of bandwidth. Which would perform better, and does the application matter?
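For what it's worth, those bandwidth figures fall straight out of bus width times per-pin data rate. A quick sketch (the per-pin rates below are my assumption of typical GDDR6 speeds that reproduce the quoted numbers, not something stated in the post):

```python
# Peak memory bandwidth = (bus width in bytes) * per-pin data rate.
# Per-pin rates here (14 and 15 Gbps) are assumed typical GDDR6 speeds.

def peak_bandwidth_gbps(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s from bus width (bits) and per-pin rate (Gb/s)."""
    return (bus_width_bits / 8) * data_rate_gbps_per_pin

print(peak_bandwidth_gbps(256, 14.0))  # 448.0 GB/s, the first card
print(peak_bandwidth_gbps(192, 15.0))  # 360.0 GB/s, the second card
```

So a narrower bus can still post decent bandwidth if the memory chips are clocked faster, which is why bandwidth is the more useful number to compare than bus width alone.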
It's entirely dependent on the GPU it's paired with and the application's demands on the card. Memory amount and speed are normally selected so that they don't bottleneck the GPU in most games, but are also no larger than generally useful, to keep prices competitive. (The more VRAM on the card, the more expensive it is to produce.)
Sometimes an application like video editing, CAD/CAM, or AI generation needs a lot more VRAM, but specialist users of these tools tend to know what they need, and if necessary can buy high-VRAM cards targeted at them, like the RTX xx90s, or previously the Titans, and before that the Quadros (and all the AMD equivalents).
Keep in mind that for performance, the primary unit we care about is the GPU itself: it takes GPU cycles to draw your next frame. Memory is just there for it to read data in and out of, so there's no point in supplying a slow GPU with more bandwidth than it can use; that's a waste.
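As a back-of-envelope way to see that (all numbers here are made-up illustrative values, not measurements): the bandwidth a GPU can actually consume is roughly its memory traffic per frame times its frame rate.

```python
# Rough estimate: sustained bandwidth used = memory traffic per frame * fps.
# Both inputs are illustrative assumptions, not real card measurements.

def required_bandwidth_gbps(traffic_per_frame_gb: float, fps: float) -> float:
    """GB/s of memory traffic generated at a given frame rate."""
    return traffic_per_frame_gb * fps

# A GPU moving ~2 GB of data per frame at 120 fps only uses ~240 GB/s,
# so pairing it with, say, a 600 GB/s memory system would leave most of
# that bandwidth sitting idle.
print(required_bandwidth_gbps(2.0, 120))  # 240.0
```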
The only time this choice has ever really mattered for gamers is when two variants of the same card were available, with more or less VRAM. That typically happens when the lower amount isn't quite enough in all cases, but the next size up is too much, so the choice (and price) is passed to the consumer. This is quite rare, though. The vast majority of cards have a VRAM allocation that's appropriate for their GPU and covers the kinds of games they'll render over their useful lifetime.