8 Pack blew the low ROPs count theory out of the water when he benched a Fiji card with the memory running over 1000MHz and got good gains from doing it. This highlights the fact that HBM1 running at 500MHz is way too slow.
Ah right. I haven't really been following his testing. It's good to know.
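For anyone wondering why 500MHz vs 1000MHz matters that much, here's a rough back-of-the-envelope sketch (assuming Fiji's 4096-bit HBM1 interface with double data rate; these are illustrative theoretical numbers, not measured results):

```python
# Rough theoretical memory bandwidth for Fiji-class HBM1 (illustrative only).
# Assumes a 4096-bit interface (4 stacks x 1024 bits) with DDR signalling,
# i.e. 2 transfers per pin per clock.

BUS_WIDTH_BITS = 4096      # Fiji: 4 HBM1 stacks, 1024 bits each
TRANSFERS_PER_CLOCK = 2    # double data rate

def bandwidth_gb_s(mem_clock_mhz):
    """Theoretical bandwidth in GB/s for a given memory clock in MHz."""
    bits_per_second = mem_clock_mhz * 1e6 * TRANSFERS_PER_CLOCK * BUS_WIDTH_BITS
    return bits_per_second / 8 / 1e9

print(bandwidth_gb_s(500))   # stock 500MHz:     ~512 GB/s
print(bandwidth_gb_s(1000))  # memory at 1000MHz: ~1024 GB/s
```

Taking the memory from 500MHz to 1000MHz is the 100% overclock asked about below, and on paper it doubles the bandwidth.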
Something that none of us can do? More importantly, what did he do to get a 100% overclock?
lol very true, he must have put it under LN2.
Yea...LN2 is great for a quick bench run, but it's not exactly consumer-level kit for everyday gaming...and I'm also not sure how many people would be comfortable gaming near LN2.
Yea, skinny GPU = GDDR5X and full fat GPU = HBM2 looks likely. Can def see Nvidia doing this: maybe X80, X80 Ti and Titan Pascal will all be based on HBM 2.0, while everything below will use GDDR5X.
Surely the chips would need to have different memory controllers built in, so they would have been designed like this from the start, and yet this is the first rumour we hear about it. I'm calling billyhooks on the whole thing. HBM across the board from both camps for next gen, I reckon.
Disclaimer: I reserve the right to be completely wrong about this.
Makes sense for the mid/low cards to come with GDDR5. Why add all that expense onto that range of cards?
Maybe the decision has come about due to availability. If GDDR5X is more plentiful than HBM2 then Nvidia will take the decision to go with whatever will allow them to maximise sales.
A bigger improvement with HBM2 but low availability, or a smaller improvement with GDDR5X but high availability. Nvidia always want to shift high volume, so if GDDR5X allows this then they will go for it. It will still be an improvement over GDDR5.
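To put rough numbers on that trade-off, here's a quick sketch using the ballpark per-pin data rates being quoted at the time (GDDR5 around 7Gbps, GDDR5X starting around 10Gbps, HBM2 around 2Gbps per pin on a 1024-bit stack); the bus widths are just typical examples, not confirmed specs:

```python
# Ballpark theoretical bandwidth comparison (illustrative, not official specs).
# bandwidth (GB/s) = per-pin data rate (Gbps) * bus width (bits) / 8

configs = {
    "GDDR5,  256-bit @ 7 Gbps":             (7,  256),
    "GDDR5X, 256-bit @ 10 Gbps":            (10, 256),
    "GDDR5X, 384-bit @ 10 Gbps":            (10, 384),
    "HBM2,   2 stacks (2048-bit) @ 2 Gbps": (2,  2048),
    "HBM2,   4 stacks (4096-bit) @ 2 Gbps": (2,  4096),
}

for name, (gbps_per_pin, bus_bits) in configs.items():
    print(f"{name}: ~{gbps_per_pin * bus_bits / 8:.0f} GB/s")
```

So even GDDR5X at its lower end is a decent step up from plain GDDR5, while HBM2 is on another level entirely on paper, which is presumably why it would be reserved for the big chips if supply is tight.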
Don't blame them. AMD should think the same; then they may see sales improve in 2016. Know thy enemy and all that.
Can't wait for the high end to have low availability, which leads to scorching levels of price gouging and more experiments with new pricing levels. Just hope HBM isn't the ideal excuse some companies would like for pushing prices ever higher.
Yeah, if the availability and expense of adding HBM 2.0 to non top end cards is going to slow things down, then GDDR5X would absolutely make sense, leaving HBM 2.0 for the higher end cards. That would mean more availability across the whole range until HBM matures and becomes readily available and less expensive. Either way, not really bothered if Nvidia or AMD do something like this. Anything to speed up progress, and as long as there's solid availability of the higher end stuff it's all good.