Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed
    Votes: 207 (39.2%)
  • (on) Overcrowding, standing room only
    Votes: 100 (18.9%)
  • (never ever got on) Chinese escalator
    Votes: 221 (41.9%)
  • Total voters: 528
That bit makes me laugh, as we can see that is exactly what they have done with Vega, launched a cheaper competitor within a few weeks to worry NVidia. :D:p:D
64 weeks to be precise :D :D

Oh and it won't be cheaper :p Apart from that, the "article" is obviously spot on :p :p :p
 
Yeah, but on the other hand someone needs to push the boundaries. However, they do have a habit of stuffing their products full of features that are questionable at best. TrueAudio, anyone?


The last "hit" they had with a gpu feature was eyefinity, that got a decent uptake and forced nvidia's hand into making surround. Since then they've tried true-audio and tressfx, both of which have only appeared in a couple of amd sponsored titles iirc.
 
TROLOLOLOLOLOLOL.

"They could crush nVidia with this card they've just made, but are deliberately running it at 50% output and 100% power, because they don't want to pull ahead of nVidia."

LOLOLOLOLOL.

Nope.jpg

Too funny :D

 
The last "hit" they had with a gpu feature was eyefinity, that got a decent uptake and forced nvidia's hand into making surround. Since then they've tried true-audio and tressfx, both of which have only appeared in a couple of amd sponsored titles iirc.

TressFX has been used in a few recent titles like Rise of the Tomb Raider and Deus Ex: Mankind Divided. It's not called TressFX in those games, though; it's PureHair or something like that. I think what happens now is that developers can use GPUOpen, which has TressFX in there, and they can improve upon it or use it as is.
 
TressFX has been used in a few recent titles like Rise of the Tomb Raider and Deus Ex: Mankind Divided. It's not called TressFX in those games, though; it's PureHair or something like that. I think what happens now is that developers can use GPUOpen, which has TressFX in there, and they can improve upon it or use it as is.

DX:MD uses PhysX/HairWorks components I think despite the earlier marketing - or at least all the support files are there for it - check the binary folder.

The last "hit" they had with a gpu feature was eyefinity, that got a decent uptake and forced nvidia's hand into making surround. Since then they've tried true-audio and tressfx, both of which have only appeared in a couple of amd sponsored titles iirc.

nVidia already had surround; it got renamed Mosaic and removed from the GeForce driver until AMD crashed the party, forcing them to put it back.
 
nVidia already had surround; it got renamed Mosaic and removed from the GeForce driver until AMD crashed the party, forcing them to put it back.

I believe Mosaic is still only for quadro cards. The geforce cards have surround, which is a more limited and less customisable version.
 
I believe Mosaic is still only for quadro cards. The geforce cards have surround, which is a more limited and less customisable version.

Yup, most of the advanced features are locked down to Quadro. I can't remember what it was called originally - not Surround - and only certain cards supported 3-monitor configurations; most only did span mode across 2, getting a 3rd display working via an additional GPU was hit and miss, and to get any real features you had to use a hacked version of nView or something. I'll give AMD that: they've often forced nVidia to stop dicking customers around and unlock things that should just be there by default.
 
Interesting article on Reddit, summarising a YouTube video on why we haven't seen the full performance of Vega yet:

https://www.reddit.com/r/Amd/comments/6rm3vy/vega_is_better_than_you_think/

https://www.youtube.com/watch?v=-G1nOztqWm0

Copy of the reddit post that was copied from the YouTube video:

VEGA 10 (GCN 5.0) Architecture is at present being judged by the Frontier Edition (Workstation / PRO) Drivers, and while it does have (Consumer / RX) Drivers included with the ability to switch between the two... currently neither of the VEGA 10 Drivers actually support the VEGA 10 Features beyond HBCC.

Yes, the Workstation Drivers do support FP16 / FP32 / FP64, as opposed to the Consumer Drivers that support only FP32 (Native) and FP16 via Atomics. Atomics allows a Feature to be used that is Supported but you're still Restricted by Driver Implementation as opposed to Direct GPU Optimisation.

FP16 Atomics does not provide the same leverage for Optimisation as a Native FP16 Pipeline. Essentially we're talking the difference Vs. FP32 Pipeline of +20% Vs. +60% Performance.
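
As a rough illustration of what "native FP16" packed math buys you, here's a minimal Python sketch of the packing idea: two 16-bit values sharing one 32-bit word, which is where the doubled FP16 rate comes from. The bit layout here is illustrative only, not AMD's actual hardware format.

```python
import numpy as np

# Packed math in a nutshell: one 32-bit lane carries two FP16 values at once.
# (Illustrative bit-twiddling only; not AMD's actual register layout.)
a, b = np.float16(1.5), np.float16(-0.25)

a_bits = int(a.view(np.uint16))      # raw 16-bit pattern of a
b_bits = int(b.view(np.uint16))      # raw 16-bit pattern of b
packed = (b_bits << 16) | a_bits     # both operands in a single 32-bit word
print(f"packed word: 0x{packed:08x}")

# Unpack and confirm both halves survive the round trip.
lo = np.uint16(packed & 0xFFFF).view(np.float16)
hi = np.uint16(packed >> 16).view(np.float16)
print(lo, hi)  # -> 1.5 -0.25
```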

Now it should still be noted that, we're not seeing a +100% Performance; because... The Asynchronous Compute Engines (ACE) are still limited to 4 Pipelines and only support Packed Math Formats, which requires a slightly larger and more complex ACE than an FP32 version... thus you're not strictly getting 8x FP16 or 4x FP32 as in Legitimate Threads, but instead the Packing and Unpacking of the Results is occurring via the CPU (Drivers), so you have added Latency and what can be best described as "Software Threading"

So yeah, you're looking at ~40% Performance compared to a pure Hardware Solution; still, this is within the same region of performance improvement that NVIDIA achieve through Giga-Threading, which is almost literally Hyper-Threading for CUDA.

And as such it will see marginal benefits (up to 30%) in Non-Predictive Branches (i.e. Games) and 60% in Predictive Branches (i.e. Deep Learning, Rendering, Mining, etc.)

As this is entirely Software Handled, assuming support for Packed Math within the ACE... this is why we're seeing the RX VEGA Frontier Edition essentially on par with GCN 3.0 IPC (if GCN 3.0 were capable of being Overclocked to the same Clock Speeds). So, eh... this provides Decent Performance, but keep in mind: essentially what we're seeing is what VEGA is capable of on FIJI (GCN 3.0) Drivers.

In short... what is happening is the Drivers are acting as a Limiter, in essence you have a Bugatti Veyron in "Road" Mode; where it just ends up a more pleasant drive overall... but that's a W12 under-the-hood. It can do better than the 150MPH that it's currently limiting you to. The question here ends up being, "Well just how much of a difference will Drivers make?" ... Conservatively speaking, the RX VEGA Consumer Drivers are almost certainly going to provide 20 - 35% Performance Uplift over what the Frontier Edition has showcased on FIJI Drivers.

Yet most of that optimisation will come from FP16 Support, Tile-Based Rendering, Geometry Discard Pipeline, etc. while HBCC will continue to ensure that the GPU isn't starved for Data maintaining very respectable Minimums that are almost certainly making NVIDIA start to feel quite nervous.

Still, this isn't the "Party Trick" of the VEGA Architecture. Something that most never really noticed was AMDs claim when they revealed Features of Vega.

Primarily that it supports 2X Thread Throughput. This might seem minor, but what I'm not sure people quite grasped (NVIDIA did, because they got the GTX 1080 Ti and Titan Xp out to market ASAP following the official announcement of said features) is that this is actually perhaps THE most remarkable aspect of the Architecture. So... what does this mean?

In essence the ACE on GCN 1.0 to 4.0 has 4 Pipelines, each 128-Bit Wide. This means it processes 64-Bit on the Rising Edge, and 64-Bit on the Falling Edge of a Clock Cycle. Now each CU (64 Stream Processors) is actually 16 SIMD (Single Instruction, Multiple Data / Arithmetic Logic Units); each SIMD Supports a Single 128-Bit Vector (4x 32-Bit Components, i.e. [X, Y, Z, W]), and because you can process each individual Component... this is why it's denoted as 64 "Stream" Processors, because 4x16 = 64.

As I note, the ACE has 4 Pipelines that Process 4x 128-Bit Threads Per Clock. The Minimum Operation Time is 4 Clocks... as such 4x4 = 16x 128-Bit Asynchronous Operations Per Clock (or 64x 32-Bit Operations Per Clock).
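
Writing that arithmetic out as a quick sketch (these are the post's own figures, not verified against AMD documentation):

```python
# The post's GCN 1.0-4.0 arithmetic written out; figures are the post's own,
# not checked against AMD documentation.
simd_per_cu = 16
components_per_simd = 4                    # 4x 32-bit components per 128-bit vector
print(simd_per_cu * components_per_simd)   # 64 "stream" processors per CU

ace_pipelines = 4                          # claimed ACE pipelines
ops_128bit_per_clock = ace_pipelines * 4   # the post's "4x4 = 16" step
ops_32bit_per_clock = ops_128bit_per_clock * (128 // 32)
print(ops_128bit_per_clock, ops_32bit_per_clock)   # 16 and 64, as claimed above
```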

GCN 5.0 still has the same 4 Pipelines, but each is now 256-Bit Wide. This means it processes 128-Bit on the Rising Edge, and 128-Bit on the Falling Edge. Each CU is also now 16 SIMD that support a Single 256-Bit Vector or Double 128-Bit Vector or Quad 64-Bit Vector (4x 64-Bit, 8x 32-Bit, 16x 16-Bit).
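
And the claimed GCN 5.0 lane split for a 256-bit vector, again just the post's numbers written out:

```python
# How one 256-bit vector divides into lanes at each element width,
# matching the "4x 64-bit, 8x 32-bit, 16x 16-bit" figures above.
vector_bits = 256
for element_bits in (64, 32, 16):
    print(f"{element_bits}-bit elements: {vector_bits // element_bits} per vector")
```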

It does remain the same SIMD merely the Functionality is expanded to support Multiple Width Registers, in a very similar approach to AMD64 SIMD on their CPU; which believe it or not, AMD SIMD (SSE) is FASTER than Intel because of their approach. This is why Intel kept introducing new Slightly Incompatible versions of SSE / AVX / etc. They're literally doing it to screw over AMD Hardware being better by using their Market Dominance to force a Standard that deliberately slows down AMD Performance, hence why Bulldozer Architecture appeared to be somewhat less capable in a myriad of common scenarios.

Anyway, what this means is Vega remains 100% Compatible and can be run as if it were a current Generation GCN Architecture. So all of the Stability, Performance Improvements, etc. should translate pretty well, and it will act in essence like a 64CU Polaris / Fiji at 1600MHz; and well, that's what we see in the Frontier Edition Benchmarks.

Now a downside of this is that it's still, strictly speaking, using the "Entire" GPU to do this... so the power utilisation numbers appear curiously High for the performance it's providing; but remember, it is being used as if under 100% Load, while in reality its Utilisation is actually 50%. Here's where it begins to make sense as to why, when they originally began showing RX VEGA at Trade Conventions, they were using it in a Crossfire Combination; it is a Subtle hint (to anyone paying attention, again like NVIDIA) at the Ballpark of what a SINGLE RX VEGA will be capable of under a Native Driver when fully Optimised.

And well... its performance is frankly staggering, as it was running Battlefield 1, Battlefront, Doom and Sniper Elite 4 at UHD 5K at 95 FPS+. For those somewhat less versed in the processing Power Required here:

The Titan Xp is capable of UHD 4K on those games at about 120 FPS; if you were to increase it to UHD 5K it would drop to 52 FPS. At this point it's perhaps dawning on those reading this why NVIDIA have somewhat entered "Full Alert Mode"... because Volta was aimed at ~20% Performance Improvement, and this was being achieved primarily via just making a larger GPU with more CUDA Cores.
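
For scale, the raw pixel counts at those two resolutions (standard UHD 4K and 5K sizes; the FPS numbers above are the post's claims, not measurements):

```python
# Pixel counts at the resolutions being compared above.
uhd_4k = 3840 * 2160    # 8,294,400 pixels
uhd_5k = 5120 * 2880    # 14,745,600 pixels
print(uhd_4k, uhd_5k, round(uhd_5k / uhd_4k, 2))   # 5K pushes ~1.78x the pixels of 4K UHD
```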

RX VEGA has the potential to dwarf this in its current state. Still, this also begins to bring up the question... "If AMD have that much performance just going to waste... Why aren't they using it to Crush NVIDIA? Give them a Taste of their own Medicine!" Simple... they don't need to, and it's actually not advantageous for them to do so. While doing this might give them the Top-Dog Spot for the next 12-18 months... NVIDIA aren't idiots, and they'll find a way to become competitive; either Legitimately, or via utilising their current Market Share.

And people will somewhat accept them doing this to "Be Competitive"; but if AMD aren't being overly aggressive and are letting NVIDIA remain in their Dominant Position, while offering value and slowly removing NVIDIA from the Mainstream / Entry Level... well then not only do they know that they can, with each successive "Re-Brand", Lower Costs, Improve the Architecture and offer a Meaningful Performance Uplift for their Consumers while remaining Competitive with anything NVIDIA produce.

They can also (which they do appear to be doing) offer better performance and value with Workstation GPUs... again better than what NVIDIA can offer, and in said Arena NVIDIA don't have the same tools (i.e. Developer Support / GameWorks / etc.) to really do anything about this beyond throwing their toys out of the pram. As I note here, NVIDIA can't exactly respond without essentially appearing to be petty / vindictive and potentially breaking Anti-Trust (Monopoly) Laws to really strike out against AMD essentially Sandbagging them.

With perhaps the worst part for NVIDIA here being, they can see it plain as bloody day what AMD are doing, but can't do anything about it. Knowing that regardless of what they do, AMD can within a matter of weeks put together a next-generation launch (rebrand), push out new drivers that tweak performance and simply match it while undercutting the price by £20-50. Even at the same price, it will make NVIDIA look like it's losing its edge.

THAT is what Vega and Polaris have both been about for AMD, and the same is true with Ryzen, Threadripper and Epyc. AMD aren't looking at a short-term "Win" for a Generation... they're clearly seeking to destroy their competitors' stranglehold on the Industry as a whole.


Pmsl, best tripe
 
Pmsl, best tripe

There is some truth to some of that, but I don't think people are going to see huge performance gains so much as power savings.

For instance, there are probably some bits where, say, a setup engine (A) is producing unoptimised results inefficiently at say 85% utilisation, straight into the next stage (B) that is working at 100% to handle the results and then handing them off to another part (C) that is working at say 75% utilisation due to being bottlenecked by (B). With certain features enabled or programmed for, which currently aren't, (A) now produces more optimised results, meaning (B) only has to work at 50-60% to do the same workload, reducing power and/or allowing (A) to work a little bit faster and consequently better utilising the capabilities of (C). In this kind of context you'd get say 30-35% power saving at the same performance level, or 15-20% power saving with a 10-15% performance uplift (just ballpark figures for example purposes).
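
As a toy model of that A -> B -> C scenario (made-up utilisation numbers from the paragraph above, plus a crude power-scales-with-utilisation assumption, purely for illustration):

```python
# Toy model of the A -> B -> C pipeline described above. Utilisations are the
# ballpark figures from the paragraph; "power ~ utilisation" is a crude
# simplifying assumption, not a real GPU power model.
before = {"A": 0.85, "B": 1.00, "C": 0.75}   # B is the bottleneck stage
after  = {"A": 0.85, "B": 0.55, "C": 0.75}   # same throughput, B relaxed to ~55%

for stage in before:
    drop = (1 - after[stage] / before[stage]) * 100
    print(f"stage {stage}: {before[stage]:.0%} -> {after[stage]:.0%} "
          f"utilisation ({drop:.0f}% less work for the same output)")
# How much of stage B's saving shows up at board level depends on how much of
# total power B accounts for -- hence the rough 15-35% ranges quoted above.
```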
 
You call it tripe.

Yet offer no information that goes against what the article is saying.

I would love it to be true...

A lot of it doesn't make much sense though - there is little reason for AMD to hold back now. Either way nVidia could potentially crush them, but going on the offensive with that kind of speed would at least have nVidia on the back foot for longer before they could respond - and if they wanted to hold back, it would have been better to make a more cost-effective card than something that, as a mid-range to upper mid-range performer, is very cost-ineffective with the size of the core, HBM2, etc.

Also the stuff about Volta is BS - nVidia never set out to create a 20% iteration. Volta has been very much focused on other areas, with GeForce a secondary focus, and what they have done should have AMD worried, not the other way around: the failings of 10nm have caused them to perfect making extremely large dies that normally wouldn't even be considered, and they aren't needing to use that space on stuff relevant to gaming performance. So strip that out and they've got a lot of space to play with even at the 12FF level, never mind a shrink to 10 or 7nm - and don't forget Volta was originally aimed at 10nm, so a shrink wouldn't be so prohibitive that they'd have no choice but to stick with 12FF.
 
Interesting article on Reddit, summarising a YouTube video on why we haven't seen the full performance of Vega yet:

https://www.reddit.com/r/Amd/comments/6rm3vy/vega_is_better_than_you_think/

https://www.youtube.com/watch?v=-G1nOztqWm0

Copy of the reddit post that was copied from the YouTube video:

SNIP SNIP

I WANT TO BELIEVE! :D
 
Looking at the NEW undercover power I'm like...

**** they will unlock Vega like they unlocked the super powers of the Cell processor in the PlayStation 3 :cool:
 
Interesting article on Reddit, summarising a YouTube video on why we haven't seen the full performance of Vega yet:

https://www.reddit.com/r/Amd/comments/6rm3vy/vega_is_better_than_you_think/

https://www.youtube.com/watch?v=-G1nOztqWm0

Copy of the reddit post that was copied from the YouTube video:

SNIP SNIP

He's vaguely right on a few things, though he seems to understand them poorly and explain them really badly .... but his idea that AMD are purposely sandbagging Vega performance to 50% of what it can be, in order not to upset NVIDIA, is utterly farcical, and makes no sense anyway, as Volta is a done deal, and if they beat that before it was released then NVIDIA would be up **** creek.

However, I do think drivers, perhaps firmware and definitely game patches can massively improve Vega's performance .....

People shouldn't forget that at launch, a 290X struggled to beat a 770 in a lot of games. That quickly changed to where a 290 non-X demolished the 780. Soon both 290 and 290X beat the 780Ti and Titan Black handsomely .... now, in a lot of games, they have double the frame rates.

AMD would have known that there was a huge amount of performance left on the table with Hawaii when it launched, but obviously couldn't promise it or price it that way because the projected performance was not yet a done deal ... not some ridiculous strategy of not riling NVIDIA.

I wouldn't expect quite such an incredible transformation here. But I would expect at the very least 20-30% gains *after* launch, over the course of the next year or so, in many games.
 
Don't, you will get let down again.

Actually, I never did.

I've been a bit less than popular around here for constantly nagging about how the HBCC and HBM are pro features not suitable for a gaming card, that the power envelope will be a concern, that DSBR may need special coding to make full use of, and that there will be no 'magic drivers', etc etc etc

People really hype themselves up and get let down, then come out with pitchforks for something they did to themselves.

Personally I think Vega is a really nice architecture. Sure, it's not optimised for gaming (and it can't be, otherwise they would've put 16GB of GDDR5X on it and dropped the HBCC, having 2 separate Vegas for gaming/pro markets) but alas AMD's budget means we can only have 1 arch and AMD chose to make a pro-focused architecture. That's fair enough. But it's also not to say though that Vega is a failure.

For one thing it has amazing compute and capabilities that you just can't find on the Nvidia side. For another, the software has made huge strides on the professional front and there's finally an alternative to CUDA. Then we have its use in Macs, which will force Adobe's hand into making use of the new AMD software stack... The truth is that outside the gaming world there's a lot of enthusiasm about Vega.

Now, on the gaming side I think Vega is stuck with some poor choices on the HBCC/HBM but there is hope that the DSBR and the 'packed math' will make for significant gains in new games. This will be helped by the fact that DX12/Vulkan have been picked up and their pace is accelerating. Sure, the power consumption will be bad, but the performance will get better.

The problem is that AMD needed to take a 'leapfrog' kind of step in efficiency and they didn't. For smaller chips like Polaris, the extra watt percentage is reasonable (I'd still take my RX480 over any 1060). But for bigger chips that % difference becomes significant.

Still, the RX Vega 56 looks like a sweet spot for me. Better than a 1070 due to its raw compute and with lots of extra features: more powerful async for Vulkan/DX12, full DX12 feature set, and DSBR/packed-math for new games. It's a no-brainer for me so I'm going to try and grab one. Obviously concerned about them miners though...
 