
AMD Working on At Least Three Radeon RX Vega SKUs, Slowest Faster than GTX 1070

Ditto,
Once Nvidia supports Freesync I'll happily go green again, but for now I'm AMD only.

That's not going to happen, and I doubt AMD would take up G-sync even if NV allowed them to. I know AMD made theirs an open standard, but probably because they were not first to release the technology? If the roles were reversed, maybe they wouldn't have made it open and freeeeee. I see the whole Freesync thing as good marketing, which then lets customers have a dig at NV for not supporting it :). I do think we should now have monitors that support both though.


Regarding the rest of your message: if AMD is to survive and do well, at some point they need to get closer to NV and Intel pricing. So waiting and waiting for AMD to release something cheaper may soon stop working anyway. Pricing may also help gauge when the competition might launch something. With Volta getting closer, I still reckon that if AMD release a card that's near Ti performance, it will need to be very competitively priced to grab the sale, because Volta will take another leap ahead. If they both released similarly performing parts at the same time, they could and should be similarly priced.


I'm not sure what you meant by AMD's top-end card staying top of the pile for longer? If they don't release something better then sure, it will be top of the pile for a while, but if they release something faster it doesn't mean the last top-end card is no longer relevant or useful. Personally I think the progress NV is making is great. I use a TXP too, and even though the new TXPd is 8-10% faster and devalues my card I don't care; it's still a very, very fast card that will last me a while before needing an upgrade. Next time I upgrade it should also mean I can make a bigger leap.
 
With Volta around the corner, Vega isn't even relevant, and I always buy AMD
Hard to say. The performance headroom left for Nvidia on the 16nm node is a 20% uplift over the Titan Xp, 30% over the Ti, if they release a 600mm² chip, but that is unlikely seeing how they just released those two cards a month ago; they would need at least another six months before they dethrone them with another card.
If Volta is being released in 2H 2017 on 16nm instead of 10nm as they were planning, then it would replace the low-to-mid range, from the 1050 to the 1080, which is where Vega is supposed to be faster.
AMD might have skipped the high end with the Polaris generation, but that doesn't mean Vega is going to be outdated, especially if you look at it through a performance metric as you seem to do.
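For rough illustration, here's one back-of-envelope behind those percentages, assuming performance scales roughly with die area at similar clocks (GP102's 471mm² size is public; the 600mm² ceiling and linear area-to-performance scaling are my assumptions, not anything Nvidia has said):
Code:
// Naive headroom estimate: assume performance scales linearly with die
// area at similar clocks on the same 16nm node.
#include <cstdio>

int main() {
    const double gp102_mm2 = 471.0;  // Titan Xp / 1080 Ti die size
    const double big_mm2   = 600.0;  // hypothetical near-reticle chip
    printf("Uplift over Titan Xp: ~%.0f%%\n",
           (big_mm2 / gp102_mm2 - 1) * 100);  // ~27%
    // The 1080 Ti is a cut-down GP102, so the gap over it would be a
    // little larger, in the same ballpark as the 20-30% quoted above.
    return 0;
}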
 
With Volta around the corner, Vega isn't even relevant, and I always buy AMD

Vega is relevant in the same way Ryzen was relevant to the CPU market. It's important for AMD to be competitive so we consumers don't get shafted by ever increasing GPU prices.

If Vega comes out at 1080 performance at 1070 pricing, that's not a fail. Yes, they don't take the performance crown, but it puts pressure on nVidia not to price their comparable-performance GPUs so high.

It's not like it's never happened before either; the 4850/4870s appeared from nowhere at $300, with performance comparable to the $450 GTX260, and blew nVidia's pricing out of the water. nVidia was forced to slash prices, and if I remember rightly offered early buyers cashback to save face.

This is exactly what the GPU market needs now to shake up the status quo.
 
If AMD manage to come out with a card that will beat an NVidia card, they'd only need to undercut green team by £/$100 to be a sales success. Those thinking or hoping a 1080ti killer will be £400 are delusional. Such a card will be £550-600 easily.
 
Hard to say. The performance headroom left for Nvidia on the 16nm node is a 20% uplift over the Titan Xp, 30% over the Ti, if they release a 600mm² chip, but that is unlikely seeing how they just released those two cards a month ago; they would need at least another six months before they dethrone them with another card.
If Volta is being released in 2H 2017 on 16nm instead of 10nm as they were planning, then it would replace the low-to-mid range, from the 1050 to the 1080, which is where Vega is supposed to be faster.
AMD might have skipped the high end with the Polaris generation, but that doesn't mean Vega is going to be outdated, especially if you look at it through a performance metric as you seem to do.

Didn't realise they had to go with 16nm over 10nm :( There goes the originally planned graph where Volta was going to be 50% faster than Pascal. I was hoping for another 8800GTX-style card.
 
If AMD manage to come out with a card that will beat an NVidia card, they'd only need to undercut green team by £/$100 to be a sales success. Those thinking or hoping a 1080ti killer will be £400 are delusional. Such a card will be £550-600 easily.

It never seems to work like that for AMD though, as Nvidia just drop their card by $100 to match it, and for whatever reason, even when the AMD card is faster, if they are similarly priced Nvidia seems to outsell AMD hugely.

It's only when AMD brings out a gem that undercuts NVidia by way more than they can drop that AMD have a raging success on their hands, e.g. the 4850/4870.
 
GloFo is pretty efficient as long as you don't push the clock way beyond a certain point, past which power draw becomes crazy compared to the clock and performance scaling. The RX480 is at ~150W and the GTX 1060 at ~120W; that's nowhere near twice the efficiency.
As long as AMD doesn't give in to the pressure and OC Vega beyond what the node allows, efficiency should be within 20-30W of the 1080/Ti equivalent: 220-250W for Nvidia versus 250-300W for AMD. That's what I am hoping for at least. Then if people want performance over efficiency they can OC, but AMD shouldn't make the choice for them.
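To put a number on the "nowhere near twice" point, a quick ratio using the reference-card figures above (the rough performance parity between RX480 and GTX 1060 is an assumption for the sake of the sketch):
Code:
// Perf-per-watt ratio from the ~150W / ~120W reference figures above,
// assuming the two cards perform roughly the same on average.
#include <cstdio>

int main() {
    const double rx480_w = 150.0, gtx1060_w = 120.0;
    const double rel_perf = 1.0;  // assumption: similar average performance
    printf("1060 perf/W vs RX480: %.2fx\n",
           (rel_perf / gtx1060_w) * rx480_w);  // 1.25x, not 2x
    return 0;
}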

Except the RX580 is not really anywhere near 150W:
https://www.techpowerup.com/reviews/Sapphire/RX_580_Nitro_Plus/28.html
234W average gaming for the RX580 in boost mode, 214W in quiet mode.



Performance per watt: https://www.techpowerup.com/reviews/Sapphire/RX_580_Nitro_Plus/31.html
The 1060 is at 180% of the RX580; that's close enough to twice. The 1080 is at twice the performance per watt.



My other comparison: the RX480 has similar performance per watt to the 970, despite the latter being built on 28nm.


Any way you dice it, AMD has a long way to catch up. That may well be somewhat down to GlobalFoundries, but that is what Vega will be produced on. There has been no significant improvement in GF's 14nm process, as witnessed by the RX580. I suspect you are right that clock speed is an issue, but then this goes back to my earlier point: if Vega runs at 1500MHz, how much power must it be drawing, given the power explosion Polaris suffers from at higher clocks? You said yourself that the GF process is to blame, in which case Vega will most likely suffer the same fate. And then you start seeing some reports of 1200MHz samples, which on the face of it is meaningless, but if you look at what Polaris does increasing clocks from 1200 to 1500MHz, maybe there is something behind that.



I expect Vega to have a good boost in efficiency, but they just have a lot of ground to cover and a process that doesn't seem well suited to large, high-frequency GPUs.

AMD themselves have stated the MI25 GPU is 300W at 1500MHz, and that probably works out from scaling Polaris and subtracting a decent performance gain and HBM2 savings. Most consumers won't care that much comparing a 300W Vega10 and a 230W 1080ti. My biggest concern would actually be yields for chips like that. The fastest Vega might be biting at the heels of the 1080ti but quite rare and similarly priced, while a 1200MHz part that is more 1080-level performance might be a real bang-for-buck card.
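A hedged sanity check of that 300W figure, naively scaling Polaris 10 by shader count and clock (the linear model and the HBM2/architecture savings term are assumptions on my part, not AMD data; the 163W board power is from the TPU graphs linked above):
Code:
// Scale RX480 board power by shader count and clock, then subtract an
// assumed HBM2 + efficiency saving to see if ~300W is plausible.
#include <cstdio>

int main() {
    const double rx480_w = 163.0;                      // measured board power
    const double naive = rx480_w * (4096.0 / 2304.0)   // shader ratio
                                 * (1500.0 / 1266.0);  // clock ratio, ~343W
    const double savings = 40.0;   // assumed HBM2/architecture savings
    printf("Naive %.0fW -> ~%.0fW after savings\n", naive, naive - savings);
    return 0;
}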
 
That's not going to happen, and I doubt AMD would take up G-sync even if NV allowed them to. I know AMD made theirs an open standard, but probably because they were not first to release the technology?



Freesync isn't an open standard though; Freesync is AMD's marketing term for their drivers and software, which are closed. AMD has leveraged an industry standard (VESA Adaptive-Sync) to support their closed-source solution. And the "free" refers to free-form licensing, not free of actual cost.


I expect NVidia will release a new Gsync system in the future that also uses the Adaptive-Sync standard, but only if they feel particularly threatened by AMD. At the moment they aren't losing any sales due to Gsync.
 
Hard to say. The performance headroom left for Nvidia on the 16nm node is a 20% uplift over the Titan Xp, 30% over the Ti, if they release a 600mm² chip, but that is unlikely seeing how they just released those two cards a month ago; they would need at least another six months before they dethrone them with another card.
If Volta is being released in 2H 2017 on 16nm instead of 10nm as they were planning, then it would replace the low-to-mid range, from the 1050 to the 1080, which is where Vega is supposed to be faster.
AMD might have skipped the high end with the Polaris generation, but that doesn't mean Vega is going to be outdated, especially if you look at it through a performance metric as you seem to do.


Nvidia is using a 12nm TSMC process (a new derivative of the 16nm/20nm planar process) for Volta. There is a lot more performance potential even within the current 16nm process: Pascal has conservative clocks, power consumption and size.
 
AMD themselves have stated the MI25 GPU is 300W at 1500MHz, and that probably works out from scaling Polaris and subtracting a decent performance gain and HBM2 savings. Most consumers won't care that much comparing a 300W Vega10 and a 230W 1080ti. My biggest concern would actually be yields for chips like that. The fastest Vega might be biting at the heels of the 1080ti but quite rare and similarly priced, while a 1200MHz part that is more 1080-level performance might be a real bang-for-buck card.

The Tesla P100 is also a 300W card, with less overall performance than the GTX 1080Ti and fewer TFLOPS than the MI25.

That's the price of compute performance: FP64, and double-rate FP16. We know the MI25 has double-rate FP16; we just don't know if it also has FP64 performance that can rival the P100's.
We know it has a higher TFLOPS count for FP32 and FP16 at least, and both use HBM2.

It'll certainly be interesting to see what AMD manages to do with Vega, especially if they had to put some proper FP64 performance in there, which could help explain why Vega is so big for the MI25.
 
The Tesla P100 is also a 300W card, with less overall performance than the GTX 1080Ti and fewer TFLOPS than the MI25.

That's the price of compute performance: FP64, and double-rate FP16. We know the MI25 has double-rate FP16; we just don't know if it also has FP64 performance that can rival the P100's.
We know it has a higher TFLOPS count for FP32 and FP16 at least, and both use HBM2.

It'll certainly be interesting to see what AMD manages to do with Vega, especially if they had to put some proper FP64 performance in there, which could help explain why Vega is so big for the MI25.


It does have FP64 performance; AMD's released documents show that. Vega 10 has 1:16 FP64 performance, so around 0.75 TFLOPS, compared to the P100 which has nearly 6.0 TFLOPS. I thought Vega10 did have reasonable FP64 support, but someone corrected me on these forums the other day. Vega10 and P100 have basically the same FP16 and FP32 performance (24.0 and 12.0). The only unknown is whether the lower Pascal models actually have packed FP16 maths like the GP100; Nvidia weren't very clear. Testing shows the performance definitely isn't there, but it could be disabled at the driver level. FP16 is incredibly useful for deep learning and has some use for gaming (but it has to be coded for).
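For anyone wondering where those numbers come from, here's the arithmetic as a quick sketch (4096 shaders and ~1.5GHz are the widely reported Vega 10 / MI25 figures; 2 ops per clock per shader is the standard FMA convention):
Code:
// Peak FLOPS = 2 ops (FMA) x shader count x clock, then apply the packed
// FP16 (2x) and 1:16 FP64 ratios discussed above.
#include <cstdio>

int main() {
    const double shaders = 4096;    // reported Vega 10 shader count
    const double clock_ghz = 1.5;   // reported MI25 clock
    const double fp32 = 2 * shaders * clock_ghz / 1000;  // ~12.3 TFLOPS
    printf("FP32 %.1f, FP16 %.1f, FP64 %.2f TFLOPS\n",
           fp32, fp32 * 2, fp32 / 16);  // ~12.3 / 24.6 / 0.77
    return 0;
}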


When you say the P100 has less performance than the 1080ti, it's not really an apples-to-apples comparison, since the P100 is a dedicated HPC chip that doesn't even have video IO ports; a lot of die space is given over to FP64 support.


The MI25 also has much less bandwidth than the GP100 (512 vs 720 GB/s). For HPC and deep learning applications bandwidth is really important, which is why Nvidia invested in HBM for the GP100 but not for consumer Pascal. That is assuming Vega10 can only interface with two stacks, as AMD has shown. Potentially Vega10 could support more channels, but that would take die space and an appropriate memory controller.

I expect the MI25 to be massively cheaper than any Tesla GP100 solution, and for deep learning it will be excellent value for money, but AMD won't see widespread uptake in HPC markets without the FP64 performance.
 
It does have FP64 performance; AMD's released documents show that. Vega 10 has 1:16 FP64 performance, so around 0.75 TFLOPS, compared to the P100 which has nearly 6.0 TFLOPS. I thought Vega10 did have reasonable FP64 support, but someone corrected me on these forums the other day. Vega10 and P100 have basically the same FP16 and FP32 performance (24.0 and 12.0). The only unknown is whether the lower Pascal models actually have packed FP16 maths like the GP100; Nvidia weren't very clear. Testing shows the performance definitely isn't there, but it could be disabled at the driver level. FP16 is incredibly useful for deep learning and has some use for gaming (but it has to be coded for).


When you say the P100 has less performance than the 1080ti, it's not really an apples-to-apples comparison, since the P100 is a dedicated HPC chip that doesn't even have video IO ports; a lot of die space is given over to FP64 support.

The Tesla P100 (NVLink version, 300W):
21.2 TFLOPS FP16
10.6 TFLOPS FP32
5.3 TFLOPS FP64

PCIe P100 (250W):
18.6 TFLOPS FP16
9.3 TFLOPS FP32
4.7 TFLOPS FP64
http://www.anandtech.com/show/10433/nvidia-announces-pci-express-tesla-p100

So the MI25 has it beat 2/3 there at least.
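To make the "2/3 there" explicit, here's the side-by-side. The MI25 column uses the ~24.6/12.3 TFLOPS figures from earlier in the thread and the 1:16 FP64 ratio (so those are the thread's numbers, not official AMD specs):
Code:
// Compare the quoted peak TFLOPS: MI25 wins FP16 and FP32, loses FP64.
#include <cstdio>

int main() {
    const char*  name[3] = {"FP16", "FP32", "FP64"};
    const double mi25[3] = {24.6, 12.3, 12.3 / 16};  // ~0.77 FP64
    const double p100[3] = {21.2, 10.6, 5.3};        // Tesla P100 NVLink
    for (int i = 0; i < 3; ++i)
        printf("%s: MI25 %5.2f vs P100 %5.2f -> %s leads\n", name[i],
               mi25[i], p100[i], mi25[i] > p100[i] ? "MI25" : "P100");
    return 0;
}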

Also, the new Quadro GP100 is essentially the Tesla P100 but with a lower TDP and 1/2-rate FP64. The TDP is 235W apparently, so it seems better yields and binning have helped NVIDIA there.

http://www.anandtech.com/show/11102/nvidia-announces-quadro-gp100

AMD staying with 1/16 FP64 is a bit worrying really, especially considering the die size and TDP. It might be GloFo's 14nm process holding them back again.

EDIT: I notice AMD's slides show the MI25 as <300W. Hopefully that means 250W too, but saying <300W could also mean 290W, like Hawaii XT was.
[Attached image: AMD's Radeon Instinct MI25 slide]
 
Except the RX580 is not really anywhere near 150W:
https://www.techpowerup.com/reviews/Sapphire/RX_580_Nitro_Plus/28.html
234W average gaming for the RX580 in boost mode, 214W in quiet mode.



Performance per watt: https://www.techpowerup.com/reviews/Sapphire/RX_580_Nitro_Plus/31.html
The 1060 is at 180% of the RX580; that's close enough to twice. The 1080 is at twice the performance per watt.

My other comparison: the RX480 has similar performance per watt to the 970, despite the latter being built on 28nm.

That's because the RX580 has a lot more shaders under the bonnet that eat up power. AMD's problem ever since GCN 1.0 came out has been feeding those shaders with work, which is limited by their massive CPU/DX11 overhead disadvantage. If AMD supported driver command lists and had multi-threaded drivers to increase CPU draw calls, the RX580 might have ended up competing against the 1080, so long as AMD could keep all those shader cores fed with useful commands. You should also point out that the card you linked to is an overclocked model, which has an obvious effect on power consumption, but you are right in saying it's not a 150W card, as its predecessor the RX480 is drawing 163W according to those graphs.

It probably also explains the lack of any high-end Polaris chip. If you look at the Fury X, it wasn't much faster than a 390X in a lot of cases due to the draw-call issue, but as soon as it was asked to run a game using a modern API, a la Doom under Vulkan, we all saw what GCN was capable of.
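For the curious, this is roughly what the "driver command lists" point above refers to in D3D11: draw calls recorded on a deferred context from a worker thread, then replayed in one submission. A minimal sketch, assuming Windows with d3d11.lib linked; no pipeline state is bound here, so the draw records but renders nothing:
Code:
#include <d3d11.h>
#include <cstdio>

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* immediate = nullptr;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                 0, nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, &immediate)))
        return 1;

    // Worker-thread side: record draw calls on a deferred context.
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);
    deferred->Draw(36, 0);  // recorded, not executed (no state bound here)
    ID3D11CommandList* list = nullptr;
    deferred->FinishCommandList(FALSE, &list);

    // Main-thread side: the driver replays the whole list in one call.
    immediate->ExecuteCommandList(list, FALSE);

    // This cap bit is the crux of the post: whether the *driver* supports
    // command lists natively or the runtime has to emulate them.
    D3D11_FEATURE_DATA_THREADING caps = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &caps, sizeof(caps));
    printf("Driver command lists: %s\n",
           caps.DriverCommandLists ? "native" : "emulated");

    list->Release(); deferred->Release();
    immediate->Release(); device->Release();
    return 0;
}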
 
Yep no chance.

This is why I don't want to buy anything Nvidia due to their business practices.

Been eyeing up an ultrawide LG Freesync monitor at a great price; just need a card to complement it.

I'd be very careful buying a Freesync monitor from LG.
The support on the majority of LG's Freesync monitors is poor.
You need to make sure you get a Freesync monitor that has LFC (Low Framerate Compensation) support.
If it does not have LFC support, find one that does.
 