Caporegime · Joined 18 Oct 2002 · Posts 33,188
Firstly, CPUs and GPUs being fundamentally limited by memory performance is a fact; read any book on CPU architecture to have it confirmed. Assuming that a given architecture wouldn't gain a massive benefit from more memory speed is daft, because the architecture itself was built around a memory bandwidth and latency limit. Intel knows exactly what memory will be used, and what platform the chip will sit in, when it's being designed.
If Intel could provide 200GB/s at the same or lower latency, within the same power limit current memory uses, then their architecture would be monumentally different from today's; it would be completely unrecognisable.
So taking a current chip, saying IT doesn't need more bandwidth, and concluding that another chip with twice the bandwidth won't gain an advantage is, honestly, ignorant. If you paired quad-channel DDR4 with a 3.8GHz Pentium 4 you would see no performance boost over plain DDR, yet if you paired DDR with a current i7 you'd see a truly dramatic drop in performance.
When you're told you can use 40GB/s of bandwidth you design one chip; when you're told you can use 100GB/s you design a different chip; when you're told you can use 640GB/s you design a completely different chip again.
Comparing the bandwidth of a 390X to an existing card and saying it won't make any difference is stupid, because you could say the same about any current card's bandwidth from the viewpoint of a previous-generation card with half the bandwidth. Take the 4870, the first GPU to use GDDR5: why did it get such a performance boost over the 4850? Because AMD knew that memory would be available and designed the architecture to use the higher bandwidth. As a result the GDDR3 version (the 4850) was a little bandwidth-starved, while the GDDR5 version had enough to feed the beast. Let's not forget that the 4870, with significantly more memory bandwidth at significantly higher efficiency, competed with Nvidia's best GPU, double its size, better than in almost any other generation... coincidence? Not even slightly.
Again, read any architecture book: memory performance is the fundamental limit every chip is designed around, latency in particular, though bandwidth is also important.
Then we move on to what mupsmebeauty is saying: regardless of what you want to believe, Kaap, HBM uses drastically less power than GDDR5, and most of the difference is in signalling rather than in the chips themselves. Within the same power budget, even 10W saved is roughly 5% more power the GPU core can use; if it's more like 40W, that's more like 20% extra for the core instead.
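To put rough numbers on that power-budget point: the 5% and 20% figures quoted above fall out of a total board power of about 200W, which is an assumption here, not a number stated anywhere. A minimal sketch:

```python
# Illustrative sketch of the power-budget argument: every watt not spent on
# memory signalling can be redirected to the GPU core within a fixed board
# power limit. The 200W total is an assumed figure chosen so that 10W ~= 5%
# and 40W ~= 20%, matching the ratios quoted in the post.
def core_power_gain(total_board_w, memory_saving_w):
    """Fraction of the total board power freed up for the GPU core."""
    return memory_saving_w / total_board_w

for saving_w in (10, 40):
    gain = core_power_gain(200, saving_w)
    print(f"{saving_w}W saved on memory -> {gain:.0%} more for the GPU core")
```

This deliberately expresses the saving as a fraction of total board power, the same simplification the post makes; relative to the core's own share the percentage would be a little higher.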
As for bandwidth at a given resolution: the same frame rate at a higher resolution requires more bandwidth, but double the frame rate at 1080p and the GPU will still require more bandwidth to achieve it. More FPS at ANY resolution requires more bandwidth. The GPU may only be capable of 30fps at 4K in a game, or 120fps at 1080p in the same game, but bandwidth usage could be identical: it accesses the GPU's memory for every frame produced, and more frames = more memory accesses = more bandwidth required.
For higher performance regardless of resolution more bandwidth is required.
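The 30fps-at-4K versus 120fps-at-1080p example above can be checked with simple arithmetic, since memory traffic scales with pixels per second times frames. The 40 bytes-per-pixel figure below is a made-up illustrative constant, not a measured number:

```python
# Sketch of the frames-times-resolution argument: approximate memory traffic
# as pixels * fps * bytes-touched-per-pixel. 4K has exactly 4x the pixels of
# 1080p, so 30fps at 4K and 120fps at 1080p move the same number of pixels
# per second and hence (under this crude model) the same bandwidth.
def bandwidth_gb_s(width, height, fps, bytes_per_pixel=40):
    """Crude memory-traffic estimate in GB/s (bytes_per_pixel is illustrative)."""
    return width * height * fps * bytes_per_pixel / 1e9

b_4k = bandwidth_gb_s(3840, 2160, fps=30)     # 30fps at 4K
b_1080 = bandwidth_gb_s(1920, 1080, fps=120)  # 120fps at 1080p
print(b_4k, b_1080)  # identical under this model
```

Real GPUs reuse data heavily through caches and compression, so actual traffic is not a fixed bytes-per-pixel figure, but the scaling argument (more frames per second means more memory accesses) holds either way.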