
Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed: 207 votes (39.2%)
  • (on) Overcrowding, standing room only: 100 votes (18.9%)
  • (never ever got on) Chinese escalator: 221 votes (41.9%)
  • Total voters: 528
Status: Not open for further replies.
Crossfire and "works well", "oh my" ;)
CF works much better than SLI these days, with far more games supporting it.

Look at the list of 2016 SLI games and the one for CF: 3 games for SLI, a whopping 20 games for CF, including all the main AAA titles.
 
But with mining you don't get the constant heat-up / cool-down cycles which physically stress components...

You think running a GPU at 100 percent load for months on end doesn't stress it? Especially when it's stuffed in amongst a load of other cards and wasn't designed for that kind of life.

Heat cycles are only one aspect of wear and tear on electronics.
 
But with mining you don't get the constant heat-up / cool-down cycles which physically stress components...
Can you quantify the amount that power/heat cycling reduces the lifespan of the product?

From what I've read, constant heat will reduce a product's lifespan by a factor of 2 for every +10 degrees hotter it runs. Mining cards will be constantly 20-40 degrees (more?) hotter than gaming cards, which are idling most of the time.

That means the power/heat cycling would have to reduce longevity by a factor of at least 4 (rising to 16 at the +40 degree end) to be comparable.
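A minimal sketch of that rule of thumb in code, assuming the quoted "lifespan halves per +10 degrees" figure holds (illustrative only, not a measured failure model):

```python
# Rough sketch of the "lifespan halves per +10 C" rule of thumb quoted above.
# All numbers are illustrative assumptions, not measurements.

def relative_lifespan(delta_t_c: float, halving_step_c: float = 10.0) -> float:
    """Lifespan relative to baseline when running delta_t_c degrees hotter."""
    return 0.5 ** (delta_t_c / halving_step_c)

for delta in (10, 20, 30, 40):
    print(f"+{delta} C hotter -> lifespan x{relative_lifespan(delta):.3g}")

# +20 C gives x0.25 (a factor-of-4 reduction) and +40 C gives x0.0625
# (factor of 16), which is where the "factor of at least 4" comparison
# in the post above comes from.
```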
 

What I see is a compute card running with the Fiji-to-Vega modded drivers at 1440MHz, providing the same performance as the GTX1080 @ 1785MHz with its optimised gaming drivers...
In addition, the per-core CPU overhead is way less. Look at how some cores are running at 90% on the GTX1080 while, in the same scene at the same fps, not a single one goes over 50% on Vega.

In many of the scenes the 7700K @ 4.5GHz is running at 17% overall with the Vega GPU compared to 30% with the GTX1080/1080Ti (with some cores hitting even 90%). In addition, it consumes 1GB less VRAM.

I would say, jump on the pre-order train before miners grab them all. Ryzen owners especially are going to see a big performance increase :D
 
What I see is a compute card running with the Fiji-to-Vega modded drivers at 1440MHz, providing the same performance as the GTX1080 @ 1785MHz with its optimised gaming drivers...
In addition, the per-core CPU overhead is way less. Look at how some cores are running at 90% on the GTX1080 while, in the same scene at the same fps, not a single one goes over 50% on Vega.

In many of the scenes the 7700K @ 4.5GHz is running at 17% overall with the Vega GPU compared to 30% with the GTX1080/1080Ti (with some cores hitting even 90%). In addition, it consumes 1GB less VRAM.

I would say, jump on the pre-order train before miners grab them all. Ryzen owners especially are going to see a big performance increase :D
Nobody should jump on any train until reviews are in, unless they have money to gamble and lose.

The idea that AMD are still sandbagging is getting more and more absurd by the day.

There's also zero evidence that Vega will work better with Ryzen at this time.
 
There's also zero evidence that Vega will work better with Ryzen at this time.
It would indeed be stupid for AMD to deliberately slow Vega's performance on the Intel platform, considering the big majority of gaming PCs have Intel CPUs and most reviewers will no doubt use an Intel CPU.



D'Oh! Completely forgot it uses the new memory :rolleyes:;)
HBM isn't exactly new.
HBM1 just didn't become common because of small (1GB) capacity per stack...
And getting mass-produced HBM2 out hasn't proceeded at a good pace either.
http://www.anandtech.com/show/11690/sk-hynix-customers-willing-to-pay-more-for-hbm2-memory
 
It would indeed be stupid for AMD to deliberately slow Vega's performance on the Intel platform, considering the big majority of gaming PCs have Intel CPUs and most reviewers will no doubt use an Intel CPU.



HBM isn't exactly new.
HBM1 just didn't become common because of small (1GB) capacity per stack...
And getting mass-produced HBM2 out hasn't proceeded at a good pace either.
http://www.anandtech.com/show/11690/sk-hynix-customers-willing-to-pay-more-for-hbm2-memory

I do not see AMD slowing down Intel performance with the Vega drivers. On the contrary, I see the Nvidia drivers slowing down Ryzen & Skylake-X performance.
Something many have been saying for months now.
 
Anyone who's ever done mining and gaming and has a vague understanding of graphics cards. Heavy compute work is as near to a synthetic load as you can get and outstrips pretty much anything any game will do. Power usage and temperature while mining will be higher than gaming, unless you heavily downclock and undervolt, which is basically throwing money away (you want to reduce power use, but not at the expense of performance).

I mine and they're nowhere near as stressed as gaming, and I'm not throwing money away because the hash rate is the same at 55-65% TDP as it is at 120% TDP. All that matters is the memory overclock.

Like I said: 100W per card, 60% TDP, 1300-1400MHz core clock vs 120% and 2000-2100MHz core; they're barely warm at 52-60°C compared with 80°C for gaming. Some serious mining myths here.
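A quick sketch of why miners cap power if the hash rate really is memory-bound, as the post claims. The 30 MH/s figure and the 200W board power at 120% TDP are assumptions for illustration; the 100W figure is from the post:

```python
# Illustrative only: if the hash rate is memory-bound and doesn't drop with
# the power limit, efficiency (MH/s per watt) scales inversely with board power.

HASH_RATE_MHS = 30.0  # assumed constant, memory-bound workload

for label, watts in (("~60% TDP", 100), ("~120% TDP", 200)):
    print(f"{label}: {HASH_RATE_MHS / watts:.2f} MH/s per watt at {watts} W")

# With the hash rate unchanged, halving board power roughly doubles MH/s
# per watt, which is the whole argument for power-capping while mining.
```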
 
I posted this earlier but it seems to be more relevant to the current topic of conversation.

This is the thing I see. Nvidia basically use CPU power to do what they didn't want to do on the GPU. They saved themselves time and money by NOT putting the scheduler on the card. I.e. they cheaped out, but gained fps by doing it. It feels like a dirty cheat, but the dirty cheat worked, so who can really complain.
 
This is the thing I see. Nvidia basically use CPU power to do what they didn't want to do on the GPU. They saved themselves time and money by NOT putting the scheduler on the card. I.e. they cheaped out, but gained fps by doing it. It feels like a dirty cheat, but the dirty cheat worked, so who can really complain.
You forgot power efficiency: by removing the transistors for hardware schedulers, you gain power efficiency.

I wouldn't say they cheaped out because now they need to spend time writing drivers to ensure that their cards are competitive. But it does mean that EOL cards aren't worth writing optimisation code for ;).
 
This is the thing I see. Nvidia basically use CPU power to do what they didn't want to do on the GPU. They saved themselves time and money by NOT putting the scheduler on the card. I.e. they cheaped out, but gained fps by doing it. It feels like a dirty cheat, but the dirty cheat worked, so who can really complain.

Nah, most of the hardware scheduling is still there - some bits have been moved to software "assistance". What nVidia is doing is actually a lot more complicated, and while somewhat more reliant on the CPU, it provides the same, and potentially better, GPU performance (as long as the CPU is up to it). There is a lot of misunderstanding about what nVidia is doing and people are quick to assume and criticise it.
 
But with mining you don't get the constant heat-up / cool-down cycles which physically stress components...
Correct, not relevant to what was being discussed, but correct :p


I mine and they're nowhere near as stressed as gaming
If you're actually using some type of card that doesn't take any hashrate hit from lowering the power limit then that's cool, but it's an anomaly and by far the exception to the rule.
 
Was reported a few pages back.



Worth noting the last ARES card ASUS made (the ARES III) came with an EK waterblock as standard (its predecessor had used a dual-GPU AIO).
This is the thing I see. Nvidia basically use CPU power to do what they didn't want to do on the GPU. They saved themselves time and money by NOT putting the scheduler on the card. I.e. they cheaped out, but gained fps by doing it. It feels like a dirty cheat, but the dirty cheat worked, so who can really complain.


They didn't cheap out and it's not a dirty trick.

Nvidia used to have the entire scheduling pipeline in hardware with Fermi, but hardware-based solutions lack a lot of flexibility and can be very complex. With Kepler they moved a small amount of that out to the driver stack, which made the hardware much more efficient. The task scheduling is still done by the GPU in hardware, but the drivers can exert more control to better load the hardware.
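A toy sketch of the general idea (not Nvidia's actual design; the instruction stream, latencies and helper names below are made up for illustration): dynamic hardware scoreboarding checks operand readiness at runtime, while a driver/compiler-assisted approach can work out the stalls once, ahead of time.

```python
# Toy contrast between dynamic (hardware-style) dependency checking and
# static (driver/compiler-style) scheduling. Purely illustrative: the
# "instructions" are (name, inputs, latency) tuples with made-up numbers.

INSTRS = [
    ("a = load()", [],         4),
    ("b = load()", [],         4),
    ("c = a + b",  ["a", "b"], 1),
    ("d = c * 2",  ["c"],      1),
]

def dynamic_schedule(instrs):
    """Hardware-style: check operand readiness every cycle at runtime."""
    ready_at, cycle = {}, 0
    for name, deps, lat in instrs:
        while any(ready_at.get(d, 0) > cycle for d in deps):
            cycle += 1                     # runtime stall check (costs logic)
        ready_at[name.split(" = ")[0]] = cycle + lat
        cycle += 1
    return cycle

def static_stalls(instrs):
    """Driver/compiler-style: precompute stall counts once, ahead of time."""
    ready_at, cycle, stalls = {}, 0, []
    for name, deps, lat in instrs:
        need = max((ready_at.get(d, 0) for d in deps), default=0)
        wait = max(0, need - cycle)
        stalls.append(wait)                # baked into the instruction stream
        cycle += wait
        ready_at[name.split(" = ")[0]] = cycle + lat
        cycle += 1
    return stalls

print(dynamic_schedule(INSTRS))  # total cycles with runtime checks -> 7
print(static_stalls(INSTRS))     # per-instruction stalls decided up front -> [0, 0, 3, 0]
```

Either way the work issues in the same order; the difference is where the dependency bookkeeping happens, which is why moving it to the driver trades die area and power for some CPU time.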
 
If you're actually using some type of card that doesn't take any hashrate hit from lowering the power limit then that's cool, but it's an anomaly and by far the exception to the rule.

Actually it's standard mining practice - despite the many pages of rhetoric from people who don't know about mining talking like experts on the matter.
 
I mine and they're nowhere near as stressed as gaming, and I'm not throwing money away because the hash rate is the same at 55-65% TDP as it is at 120% TDP. All that matters is the memory overclock.

Like I said: 100W per card, 60% TDP, 1300-1400MHz core clock vs 120% and 2000-2100MHz core; they're barely warm at 52-60°C compared with 80°C for gaming. Some serious mining myths here.
So you're putting a 24/7 overclock on the memory.

Sure you might not be stressing the core, but you sure as hell are stressing the memory.

In the ex-mining card that I had, it was indeed the memory that failed, a few months after purchase. Anecdotal and not proof of anything, of course. But you can't argue that a 24/7 overclock of your memory is less stressful for that subsystem than periodic gaming.
 