Well, aside from the fact that the bigger the memory bus, the slower it generally runs due to power concerns (slower in the sense of the maximum clocks you'll hit on the memory), the best 512-bit bus to date has produced what, 320GB/s? That isn't remotely close to the 512GB/s that should be easy to achieve, let alone the 640GB/s being rumoured (likely incorrectly, IMHO).
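For anyone who wants to sanity-check those figures, the arithmetic is just bus width times per-pin data rate (a quick sketch; the Hawaii numbers are the public spec, the other data rates are simply what each bandwidth target would require, not anything confirmed):

```python
# Peak GDDR5 bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
# Hawaii's figures are public spec; the rest are just what each target implies.

def bandwidth_gbs(bus_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gbs(512, 5.0))    # Hawaii (290/290X): 512-bit @ 5 Gbps -> 320 GB/s
print(bandwidth_gbs(512, 8.0))    # 512 GB/s on a 512-bit bus would need 8 Gbps GDDR5
print(bandwidth_gbs(768, 5.33))   # ...or a 768-bit bus at ~5.3 Gbps -> ~512 GB/s
print(bandwidth_gbs(512, 10.0))   # the rumoured 640 GB/s on 512-bit would need 10 Gbps
```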
The trouble is that 512GB/s of bandwidth on HBM will use less power than the 320GB/s that 4GB Hawaii provides, and certainly less than trying to push the memory clocks up further. At the very best, GDDR5 is probably looking at 400GB/s with a significant increase in clocks, and even higher power usage to go with it.
That is where HBM wins: for any amount of bandwidth GDDR5 can provide, HBM can do it in 30% of the power. 512GB/s has ALWAYS been achievable with GDDR5, it would just take a likely 768-bit bus, or a 512-bit bus with insane memory speeds, and it would probably use up 100-125W of the card's power... leaving only 125-150W realistically for the GPU itself. HBM can provide that same bandwidth in 30-40W, which in the same situation leaves 210-220W for the GPU inside the same 250W card power budget.
GDDR5 is completely and utterly uncompetitive. If AMD or Nvidia produced an HBM and a GDDR5 version of their latest-gen 250W cards, the HBM card would spank the GDDR5 card silly, because you could up the GPU clocks 40-50%, or increase the shader/ROP/TMU count by 40-50%, with the extra power headroom HBM provides.
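Putting those same estimates into numbers (a rough sketch; every wattage here is just the ballpark figure from the post above, not a measurement):

```python
# Rough power-budget arithmetic using the estimates quoted above.
TOTAL_BOARD_POWER = 250          # W, typical high-end card budget

memory_power_estimates = {
    "GDDR5": (100, 125),         # W for ~512 GB/s of GDDR5 (estimate from the post)
    "HBM":   (30, 40),           # W for the same bandwidth on HBM (estimate from the post)
}

for name, (low, high) in memory_power_estimates.items():
    # Whatever the memory doesn't burn is left for the GPU itself.
    print(f"{name}: {TOTAL_BOARD_POWER - high}-{TOTAL_BOARD_POWER - low} W left for the GPU")

# GDDR5 leaves 125-150 W for the GPU, HBM leaves 210-220 W -- well over 40% more
# power to spend on clocks or extra shaders/ROPs/TMUs, which is where the
# 40-50% claim comes from.
```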
Sure, this is a last gasp and the end of the road for GDDR5, no one is saying anything different, but it's not quite as obsolete just yet as some people make out.
Given the bandwidth is rumoured to be twice that of a Titan Black, I think the 390/390X might be the first single card to handle 4K at an acceptable level.
What we all need and want to know is: can it run Crysis?
If any of the current rumours regarding new cards from Nvidia and AMD have any shred of truth, then alongside DX12 I'm wondering if I should just keep my 290 for now.
DX12 will give the 290 a new lease of life in games that are coded for it.
The 380X, if it is a rebadged, faster 290, does not appeal to me at all.
If AMD hold off the 390X until late in the year, then having owned a 290 since release, I can happily wait a few more months AFTER the 390X and see what Nvidia come out with in Pascal (yes, I know that's 2016, I can wait).
So basically I think I'm pinning my hopes on the 390X or 395X2 being a superbeast, especially if DX12 gives the performance I think it might on my 290 for games designed with it; otherwise it's going to be Nvidia's newest tech. TBH I'm at the point where I may just wait for the next die shrink before bothering to upgrade.
Lots of questions with no real answers, I guess, until we see a) AMD's new batch of card specs, b) DX12 in real-world use, and maybe even c) something new from Nvidia.
Putting wild speculation aside for a sec, we need to see what actually shows up in retail and how these cards perform in legit reviews. We can't condemn a product that hasn't yet been released. By all means condemn the GTX 970 and R9 285, which are released and are pants (for different reasons), but at least give AMD a chance to release this new HBM card first before putting the boot in.
For your typical 40nm GDDR5, yes - but they are now moving to a new revision on 20nm that uses a lot less power and runs at higher frequencies. IIRC, albeit in lab conditions, they were hitting over 700GB/s in a 512-bit configuration with hand-picked (overclocked) modules - obviously you won't see that on retail GPUs.
I don't understand that 3D picture. How can that ever work properly, unless memory modules make good heatsinks nowadays?
Something that people are forgetting is that HBM, being interposer-mounted memory, has reduced latency, which improves performance by reducing idle cycles on the xPU the memory is connected to.
So as well as the bandwidth increase, which raises the amount of data that can be moved per transfer, and the latency decrease, which cuts the time each transfer takes, HBM can also perform parallel reads and writes to each stack, further reducing effective latency per chip.
HBM is more than just a bandwidth increase, and it will give a bigger performance improvement than the raw bandwidth figures alone suggest.
Really naive question here. If your VRAM has twice the bandwidth, is it functionally similar to having twice the VRAM? Or is the amount of VRAM still going to be a limiting factor no matter how high the bandwidth is?
Imagine VRAM is water in a tank, and the bandwidth is the size of the pipe used to drain water out of the tank and fill it back up.
The bigger the pipe, the quicker the tank can be filled and emptied.
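To hang some numbers off the tank analogy (purely made-up figures, just to show that capacity and bandwidth limit different things):

```python
# Hypothetical numbers purely for illustration -- capacity and bandwidth
# are separate limits, and one can't substitute for the other.
vram_capacity_gb = 4            # how much data the card can hold at once
bandwidth_gb_per_s = 320        # how fast the GPU can read/write that data

working_set_gb = 5              # textures/buffers a scene wants resident
touched_per_frame_gb = 2        # how much of it the GPU actually reads each frame

# Capacity problem: if the working set doesn't fit, extra bandwidth doesn't help,
# because data has to be shuffled in over the much slower PCIe bus instead.
print("working set fits in VRAM:", working_set_gb <= vram_capacity_gb)

# Bandwidth problem: even when everything fits, bandwidth caps how fast the GPU
# can chew through the data it needs each frame.
print("memory-side frame rate ceiling:", bandwidth_gb_per_s / touched_per_frame_gb, "fps")
```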
EDIT: 320GB/s is what Hawaii has; again, in general, the bigger the bus, the slower the memory, for power reasons. What can be achieved in a lab is with the right chip (I would presume a very simple test chip designed to maximise bandwidth, nothing like real-world usage). Yes, Hawaii could use 8GHz-effective clocks, but you'd find the power would be insane. Again, the bulk of the power usage is in the signalling, not the chip. More chips on a wider bus with slower signalling use less power, and the chips themselves are also more power-efficient at 5GHz than at 8GHz.