"CF works much better than SLI these days, with far more games supporting it."
Crossfire and "works well"? Oh my!
Look at the lists of 2016 SLI games and CF games: 3 games for SLI, a whopping 20 for CF, and that includes all the main AAA titles.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
But with mining you don't get the constant heat-up / cool-down cycles which physically stress components.
"But with mining you don't get the constant heat up / cool down cycles which physically stress components."
Can you quantify how much power/heat cycling reduces the product's lifespan?
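A rough way to put numbers on that question is the Coffin-Manson relation for thermal fatigue: cycles-to-failure falls off with a power of the temperature swing per cycle. The baseline, exponent, and cycle counts below are purely illustrative assumptions, not measured values for any real GPU.

```python
# Coffin-Manson sketch: Nf = N0 * (dT0 / dT) ** m
# All constants here are illustrative placeholders, not real GPU data.

def cycles_to_failure(delta_t, baseline_cycles=100_000.0,
                      baseline_delta_t=40.0, exponent=2.0):
    """Estimated thermal cycles to failure for a given temperature swing (C)."""
    return baseline_cycles * (baseline_delta_t / delta_t) ** exponent

# Both workloads see similar ~50 C swings (30 C idle -> 80 C load) when
# they do cycle; the difference is how often the cycles happen.
n_failure = cycles_to_failure(50.0)

gaming_cycles_per_year = 5 * 365   # a few heat-up/cool-down cycles a day
mining_cycles_per_year = 12        # steady 24/7 load, ~one restart a month

print(f"Years of gaming-style cycling: {n_failure / gaming_cycles_per_year:.0f}")
print(f"Years of mining-style cycling: {n_failure / mining_cycles_per_year:.0f}")
```

The point isn't the absolute numbers (the exponent varies by solder and package), just that a steady 24/7 load accumulates large thermal cycles far more slowly than intermittent gaming does.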

How come the Vega memory frequency is so low compared to the 1080's?
"Nobody should jump on any train until reviews are in, unless they have money to gamble and lose."
What I see is a compute card running the modded Fiji-to-Vega drivers at 1440MHz and providing the same performance as the GTX1080 at 1785MHz with its optimised gaming drivers...
In addition, the per-core CPU overhead is far lower. Look at how some cores run at 90% on the GTX1080 while, in the same scene at the same fps, not a single core goes over 50% with Vega.
In many of the scenes the 7700K @ 4.5GHz runs at 17% overall with the Vega GPU compared to 30% with the GTX1080/1080Ti (with some cores even hitting 90%). It also consumes 1GB less VRAM.
I would say jump on the pre-order train before miners grab them all. Ryzen owners especially are going to see a big performance increase!
Because it's HBM2 memory. It has a super-wide bus (2048 bits) and runs at a low clock.
GDDR runs at higher clocks but has a much narrower bus (typically 192 to 384 bits).
Different memory technologies and ways of measuring the frequency.
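The trade-off is easy to see with a back-of-the-envelope bandwidth calculation: peak bandwidth is bus width times per-pin data rate. The figures below are the commonly quoted specs for Vega 64 (2048-bit HBM2 at 945MHz, double data rate) and the GTX 1080 (256-bit GDDR5X at 10Gbps); treat them as approximate.

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps_per_pin):
    """Peak memory bandwidth in GB/s = number of pins * per-pin rate / 8 bits."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# HBM2 on Vega 64: 2048-bit bus, 945 MHz clock, double data rate
# -> 1.89 Gbps per pin, despite the "low" clock
vega = bandwidth_gbs(2048, 1.89)

# GDDR5X on GTX 1080: narrow 256-bit bus, but 10 Gbps per pin
gtx1080 = bandwidth_gbs(256, 10.0)

print(f"Vega 64:  {vega:.0f} GB/s")   # ~484 GB/s
print(f"GTX 1080: {gtx1080:.0f} GB/s") # 320 GB/s
```

So the "low" HBM2 clock still delivers more total bandwidth than the much faster-clocked GDDR5X, purely because of the 8x wider bus.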


It would indeed be stupid for AMD to deliberately slow Vega's performance on Intel platforms, considering the big majority of gaming PCs have Intel CPUs and the majority of reviewers will no doubt use an Intel CPU. There's also zero evidence at this time that Vega will work better with Ryzen.
"HBM isn't exactly new."
D'Oh! Completely forgot it uses the new memory!
HBM isn't exactly new.
HBM1 just didn't become common because of its small capacity (1GB per stack)...
And getting mass-produced HBM2 out the door hasn't proceeded at a great pace either.
http://www.anandtech.com/show/11690/sk-hynix-customers-willing-to-pay-more-for-hbm2-memory
Anyone who's ever done mining and gaming and has a vague understanding of graphics cards. Heavy compute work is as near to a synthetic load as you can get and outstrips pretty much anything any game will do. Power usage and temperature while mining will be higher than while gaming, unless you heavily downclock and undervolt, which is basically throwing money away (you want to reduce power use, but not at the expense of performance).
I posted this earlier but it seems to be more relevant to the current topic of conversation.
"This is the thing I see. Nvidia basically use CPU power to do what they didn't want to do on the GPU."
You forgot power efficiency: by removing the transistors for hardware schedulers, you gain power efficiency.
"But with mining you don't get the constant heat up / cool down cycles which physically stress components."
Correct. Not relevant to what was being discussed, but correct.

"I mine and they're nowhere near as stressed as gaming."
If you're actually using some type of card that doesn't take any hashrate hit from lowering performance, then that's cool, but it's an anomaly and by far the exception to the rule.
Was reported a few pages back.
Worth noting that the last ARES card ASUS made (the ARES III) came with an EK waterblock as standard (its predecessor had used a dual-GPU AIO).
This is the thing I see. Nvidia basically use CPU power to do what they didn't want to do on the GPU. They saved themselves the time, money and cost of not putting the scheduler on the card. I.e. they cheaped out, but gained fps by doing it. It feels like a dirty cheat, but the dirty cheat worked, so who can really complain?
"So you're putting a 24/7 overclock on the memory."
I mine and they're nowhere near as stressed as gaming, and I'm not throwing money away, because the hash rate is the same at 55-65% TDP as it is at 120% TDP. All that matters is the memory overclock.
Like I said: 100W per card at 60% TDP and a 1300-1400MHz core clock, versus 120% TDP and a 2000-2100MHz core. They're barely warm, at 52-60C compared with 80C for gaming. There are some serious mining myths here.
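The economics in the last few posts boil down to hashrate per watt. The sketch below uses hypothetical numbers for a memory-bound card (one whose hashrate barely moves with core clock, as described above); they are illustrative, not measurements from any specific GPU.

```python
def efficiency(hashrate_mhs, power_w):
    """Mining efficiency in MH/s per watt."""
    return hashrate_mhs / power_w

# Hypothetical memory-bound card: the hashrate is set by the memory
# overclock, so cutting the core power limit mostly just cuts the bill.
stock   = efficiency(30.0, 200.0)  # 120% TDP, ~2000 MHz core
limited = efficiency(30.0, 100.0)  # 60% TDP, ~1300-1400 MHz core

print(f"Stock power limit:   {stock:.2f} MH/s per watt")
print(f"Reduced power limit: {limited:.2f} MH/s per watt")
```

If the hashrate really is flat across power limits, halving the power doubles the efficiency, which is the whole argument for running at 60% TDP; on a core-bound algorithm the hashrate would drop too and the picture changes.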