Bandwidth on a card has precisely nothing to do with PCI-E bandwidth. Current Hawaii bandwidth is 300 GB/s+; one stack of HBM gives 128 GB/s of bandwidth, and AFAIK the first round of HBM will be 1 GB per stack, so 4 GB of HBM = 512 GB/s of bandwidth.
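Quick sanity check on those figures (a rough sketch using the numbers quoted above, not a spec sheet):

```python
# Back-of-the-envelope check of the HBM figures in this post.
HBM_STACK_BANDWIDTH_GBS = 128  # per-stack bandwidth, first-gen HBM
HBM_STACK_CAPACITY_GB = 1      # ~1 GB per stack for the first round of HBM
STACKS = 4

total_capacity = STACKS * HBM_STACK_CAPACITY_GB
total_bandwidth = STACKS * HBM_STACK_BANDWIDTH_GBS
print(f"{total_capacity} GB of HBM across {STACKS} stacks -> {total_bandwidth} GB/s")
# 4 GB of HBM across 4 stacks -> 512 GB/s, vs the ~300 GB/s+ of Hawaii's GDDR5
```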
A PCI-E 3.0 x16 slot provides around 16 GB/s of bandwidth. The reason to have memory on the GPU is so you can load data over there as fast as PCI-E allows at the start of a level, and then limit PCI-E traffic to mostly just telling the GPU what to do with the data it already has, rather than feeding it new data.
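To put numbers on that, here's a rough illustration (assuming a hypothetical 4 GB working set and the bandwidth figures above) of why assets live in VRAM and only get pushed over the bus once per level:

```python
# Rough comparison: moving a 4 GB working set over PCI-E 3.0 x16
# vs. reading it back out of on-card HBM.
PCIE3_X16_GBS = 16     # ~16 GB/s usable, as quoted above
HBM_4_STACK_GBS = 512  # 4 stacks of first-gen HBM

working_set_gb = 4
print(f"Over PCI-E: {working_set_gb / PCIE3_X16_GBS * 1000:.0f} ms")    # ~250 ms, fine once at level load
print(f"From VRAM:  {working_set_gb / HBM_4_STACK_GBS * 1000:.1f} ms")  # ~8 ms, cheap enough to hit every frame
```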
On the eco angle of everything, reducing idle power usage is fantastic, because burning power when you aren't actually using the device is pure waste. When you are using it, how much power it draws is pretty much irrelevant: you pick and choose how much power you want to use based on the device you buy. And 90-95% of new chips, be they mobile SoCs, GPUs or CPUs, are by design more power efficient than the previous generation.