
Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed

    Votes: 207 39.2%
  • (on) Overcrowding, standing room only

    Votes: 100 18.9%
  • (never ever got on) Chinese escalator

    Votes: 221 41.9%

  • Total voters
    528
Status
Not open for further replies.
So, my take on this is:
  • I think it's pretty clear that Vega FE is being pitted against the Titan Xp. The SPECViewperf numbers in AMD's own announcement compare these two cards.
  • I'm fine with the concept: it's both a gaming and "semi-pro" card. The "semi-" in pro because you get no certification, no long warranty, etc etc.
  • However, if Vega FE is compared to Titan Xp in compute, it's only fair to do the same for gaming. If I'm to choose between the two and I need both compute and gaming ability, I will have to compare the cards on both aspects.
  • It seems to me (just guessing) that even though 'the compute performance is there', the flip-side is that 'the gaming performance is not quite there'.
  • As a result, they're showing the card doing CAD work, but asking sites not to bench it for games.
My suspicion is that Vega with all its changes (NCU, HBCC, new geometry processors, etc) has resulted in extensive changes being needed on the software front. The gaming drivers are likely 'just not ready' and AMD still hopes it will be able to unlock considerably more performance.

They just don't want to put themselves into the same situation the RX480 was in, where launch-day drivers did a lot of injustice to the card's true potential.

Now, if Vega FE is within 15% to 20% of a Titan Xp in gaming and they hope to get another 10% from drivers, I can understand it.

But if Vega FE is like 30% slower in gaming, another 10% from drivers would just be saving them a bit of embarrassment.
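To make the two scenarios concrete, here's a quick sketch of the arithmetic (all percentages are the hypotheticals from the post above, not measured numbers):

```python
# Toy arithmetic for the two hypothetical scenarios above.

def relative_perf(deficit_pct: float, driver_gain_pct: float) -> float:
    """Vega FE performance as a fraction of Titan Xp (= 1.0), after a
    hypothetical driver uplift is applied to Vega's current score."""
    vega_now = 1.0 - deficit_pct / 100.0
    return vega_now * (1.0 + driver_gain_pct / 100.0)

# Scenario 1: 15-20% behind today, +10% from drivers
print(round(relative_perf(20, 10), 2))   # 0.88  -> within ~12% of Titan Xp
print(round(relative_perf(15, 10), 3))   # 0.935 -> within ~7%

# Scenario 2: 30% behind today, +10% from drivers
print(round(relative_perf(30, 10), 2))   # 0.77  -> still ~23% behind
```

So a 10% driver uplift only closes the gap meaningfully if the starting deficit is small.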
 
So many people are saying it's GF's fault that Polaris needs so much power, whereas it's AMD's great design that makes Ryzen so efficient. It's strange that both are on the same process, yet in one case it's GF's fault and in the other AMD is just that good. I would say the process isn't as bad as people make it out to be, else Ryzen wouldn't be so good. It's just that the GPU design isn't great.

The GCN architecture is pretty good (some parts, like geometry shading, aren't great) if software/games are written in a way that can make use of its features. The biggest problem that I can see is CPU latency: GCN cards (and prior designs) send draw calls to and from the CPU on one thread, and with video games demanding more and more draw calls, even highly overclocked CPUs can't keep up. Nvidia, on the other hand, are able to multithread incoming game code across multiple CPU threads via driver command lists, which increases the draw calls to the GPU. It's a massive advantage for the green team, as it means more of that theoretical performance can be used to drive games at higher frame rates.

AMD instead uses deferred commands (I may have mixed up deferred commands and command lists), which essentially means it's up to the developers to properly multithread the game code. AMD can't do anything about this in the drivers, as the hardware doesn't support driver command lists. Until AMD address this software imbalance they will never catch up with Nvidia, or to stay competitive they will have to design and release big chips with HBM which draw a ton of power (Vega). On the plus side for AMD, it makes getting day-one game-ready drivers a lot easier, and they're very stable, as they don't have the complications of re-writing game code for every new release. AMD have also shown what GCN can do under DX12 and in Doom. The way I look at it, GCN was a design well ahead of its time.
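The single-thread submission bottleneck described above can be illustrated with a toy throughput model. All numbers here are made up for illustration (real submission rates depend on API, driver, and CPU); the point is only that the draw-call budget per frame scales with how many cores can record commands:

```python
# Toy model (illustrative numbers only) of the single-threaded vs
# multithreaded command-submission argument above.

def max_draw_calls_per_frame(calls_per_sec_per_core: float,
                             cores_used: int,
                             target_fps: float) -> float:
    """Draw calls that fit in one frame if submission scales across cores."""
    frame_budget_s = 1.0 / target_fps
    return calls_per_sec_per_core * cores_used * frame_budget_s

# Hypothetical figure: one core can submit ~1M draw calls per second.
single = max_draw_calls_per_frame(1_000_000, cores_used=1, target_fps=60)
multi = max_draw_calls_per_frame(1_000_000, cores_used=4, target_fps=60)
print(round(single), round(multi))  # ~16667 vs ~66667 calls per frame
```

With single-threaded submission the frame budget is capped by one core no matter how fast the GPU is, which is the "theoretical performance left on the table" argument.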

The below is the difference maker: it allows Nvidia to produce cheaper, smaller GPUs, and it allowed a GTX 970 with a TDP of only 148 watts to go up against the 290/390, which pulled 250 watts.

[image: TbnOGku.jpg]


People go on and on about GPU design and new architectures, but it's the software that really matters, and the fact is Nvidia just does it better, as they have the resources for it and AMD doesn't.

I suppose the approach is just a reflection of how each company operates: AMD takes an open approach and relies on others to do the work, as with their 3D tech, FreeSync, TrueAudio and deferred commands, whereas Nvidia does everything in house with G-Sync, 3D Vision, command lists etc.
 
Last edited:
The GCN architecture is pretty good (some parts, like geometry shading, aren't great) if software/games are written in a way that can make use of its features. The biggest problem that I can see is CPU latency: GCN cards (and prior designs) send draw calls to and from the CPU on one thread, and with video games demanding more and more draw calls, even highly overclocked CPUs can't keep up. Nvidia, on the other hand, are able to multithread incoming game code across multiple CPU threads via driver command lists, which increases the draw calls to the GPU. It's a massive advantage for the green team, as it means more of that theoretical performance can be used to drive games at higher frame rates. AMD instead uses deferred commands (I may have mixed up deferred commands and command lists), which essentially means it's up to the developers to properly multithread the game code. AMD can't do anything about this in the drivers, as the hardware doesn't support driver command lists. Until AMD address this software imbalance they will never catch up with Nvidia, or to stay competitive they will have to design and release big chips with HBM which draw a ton of power (Vega). On the plus side for AMD, it makes getting day-one game-ready drivers a lot easier, and they're very stable, as they don't have the complications of re-writing game code for every new release. AMD have also shown what GCN can do under DX12 and in Doom. The way I look at it, GCN was a design well ahead of its time.

Much of the above is due to limitations in the API and because AMD have hardware-based scheduling. The only reason Nvidia can get around it is because they essentially bypass this API problem by having moved the scheduling into the driver; hence they stripped hardware scheduling from their architecture. There is no problem with GCN accepting a far higher rate of, or multiple streams of, draw calls; it's purely an API bottleneck, which was shown when AMD released Mantle.
 
Ahh, nice to see the usual. Disagree and you're a troll. Post something to back yourself up and you're a fanboy. Over what? Nonsense, really. The Titan Xp and the FE are both intended for gaming but also both intended for professional uses. They are very similar indeed. Just agree on that and leave it.
 
Anandtech has put up some specs:

http://www.anandtech.com/show/11583...hes-air-cooled-for-999-liquid-cooled-for-1499

[image: dcYU7FN.png]


Some things jump out: it seems the HBM2 they are using is a bit higher speed than in the original leaks, the TDP is lower than rumoured, and the clock speeds seem quite high for an AMD card at up to 1.6GHz, which means there must be some significant changes under the hood compared to Polaris, which is on the same process node. The specs are also for the air-cooled card and not the AIO water-cooled one.
 
Presumably the liquid-cooled one has bumped clocks; it damn well better have a good AIO on it this time.

I suspect that along with the clock bump, it will also have a much better base/boost ratio, so closer to the air-cooled card's peak clock than its base.

Otherwise that price increase is a mess, and they can bugger off. :P
 
Anandtech has put up some specs:

http://www.anandtech.com/show/11583...hes-air-cooled-for-999-liquid-cooled-for-1499

[image: dcYU7FN.png]


Some things jump out: it seems the HBM2 they are using is a bit higher speed than in the original leaks, the TDP is lower than rumoured, and the clock speeds seem quite high for an AMD card at up to 1.6GHz, which means there must be some significant changes under the hood compared to Polaris, which is on the same process node. The specs are also for the air-cooled card and not the AIO water-cooled one.

I just noticed that in the Fiji->Vega transition, the increase in raw computing power in FLOPs comes purely from clock speed. What I mean is, the Fury X and Vega FE both have 4096 SPs and differ only in clock speed.

Meanwhile Nvidia went from Maxwell Titan X (GM200) with 3072 cores, to the Pascal Titan Xp (GP102) with 3840, which is a significant jump.

Bottom line is that the 13.1 TFLOPs depend on that 1600MHz being sustainable. If we take the base clock of 1382MHz as a worst case, that comes to 11.3 TFLOPs for Vega FE. The Titan Xp is at 12.1 TFLOPs at its boost clock of 1582MHz which I believe can be sustained. So it seems to me that the cards will be quite similar in raw computing power.
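The numbers above follow from the standard single-precision peak formula (shaders × 2 ops per clock for FMA × clock speed):

```python
# Single-precision peak throughput: shaders x 2 ops/clock (FMA) x clock.
# Reproduces the TFLOPs figures quoted above.

def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz * 1e6 / 1e12

print(round(tflops(4096, 1600), 1))  # Vega FE at peak clock -> 13.1
print(round(tflops(4096, 1382), 1))  # Vega FE at base clock -> 11.3
print(round(tflops(3840, 1582), 1))  # Titan Xp at boost     -> 12.1
```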

It would be nice if we could see Titan Xp with professional drivers. It seems to me that Vega FE beats it just due to drivers (Titan Xp is benched with gaming drivers as I understand).

If past releases are any guide, a given TFLOP count translates to much higher gaming performance for Nvidia, so the Titan Xp should be far ahead of Vega FE. Only this seems not to be the case, based on the little we've seen.

Ultimately it will all come down to how much Vega FE's geometry processor, memory architecture and drivers have improved. If Vega FE is close to Titan Xp then AMD have truly closed the gap further.
 
I just noticed that in the Fiji->Vega transition, the increase in raw computing power in FLOPs comes purely from clock speed. What I mean is, the Fury X and Vega FE both have 4096 SPs and differ only in clock speed.

You're missing a hell of a lot of variables there.

Edit: Actually ignore me there. If you're talking purely FLOPS then yeah, I misread =(
 
Last edited:
The Fury X had a great AIO, what reviews did you read?

So you missed the whole issue of the pumps emitting a high-pitched whine? Months later, after AMD had claimed they'd fixed the problem, it still existed. It was basically the luck of the draw whether you got one with it or not. I had two cards close to launch and both had it, and nearly a year later another couple for other builds, and they also had it. There are plenty of YouTube videos on it and tech sites talking about the issue; not sure how you managed to miss it, as it was mentioned in quite a few reviews.

Example



It might not sound like much in the videos but in person it was immediately noticeable and got annoying fast.
 
You're missing a hell of a lot of variables there.

Edit: Actually ignore me there. If you're talking purely FLOPS then yeah, I misread =(

Yeah, all I'm saying is that we're used to X FLOPs Nvidia card performs like >>X FLOPs AMD card.

The 1060-6GB and the RX480 are very evenly matched in gaming so make for a nice comparison.

The 1060 has 1280 cores vs 2304 SPs for the 480. The 1060 is 4.3 TFLOPs (4.8 mostly, with GPU Boost 3.0) and matches the 5.8 TFLOPs of the RX480 (due to all those other factors I'm omitting).

Meanwhile Titan XP and Vega FE are 3840 and 4096 respectively and seem to be very close in TFLOPs. Vega FE should be considerably slower in games (even with gaming drivers). If it's not, then AMD are onto something...
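Using the figures quoted in this post, the historical "FLOPs gap" can be put as a rough ratio. This is only a back-of-the-envelope extrapolation from one card pair, not a general law:

```python
# Rough gaming-efficiency ratio from the quoted 1060 vs RX480 figures:
# the 1060's ~4.8 TFLOPs (boosted) matches the 480's 5.8 TFLOPs in games.
amd_flops_per_nvidia_flop = 5.8 / 4.8
print(round(amd_flops_per_nvidia_flop, 2))  # 1.21

# Naively applying that ratio: Vega FE's 13.1 TFLOPs would be "worth"
# roughly 10.8 Nvidia-style TFLOPs in games -- well short of Titan Xp's
# ~12.1. If Vega FE lands closer than that, something has changed.
titan_equivalent = 13.1 / amd_flops_per_nvidia_flop
print(round(titan_equivalent, 1))  # 10.8
```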
 
Aye, sorry for getting wrapped up in it. I'm done with it, with those two final posts. Now to wait for Frontier Edition independent review results.

Personally, I enjoy your input, it's interesting and relevant, thanks!

Really not worth your time arguing with that ignorant troll though, just put him on your 'ignore' list (along with loadsamoney, doom etc), like I have. Makes this forum so much more pleasant :-)
 
I just noticed that in the Fiji->Vega transition, the increase in raw computing power in FLOPs comes purely from clock speed. What I mean is, the Fury X and Vega FE both have 4096 SPs and differ only in clock speed.

Meanwhile Nvidia went from Maxwell Titan X (GM200) with 3072 cores, to the Pascal Titan Xp (GP102) with 3840, which is a significant jump.

Bottom line is that the 13.1 TFLOPs depend on that 1600MHz being sustainable. If we take the base clock of 1382MHz as a worst case, that comes to 11.3 TFLOPs for Vega FE. The Titan Xp is at 12.1 TFLOPs at its boost clock of 1582MHz which I believe can be sustained. So it seems to me that the cards will be quite similar in raw computing power.

It would be nice if we could see Titan Xp with professional drivers. It seems to me that Vega FE beats it just due to drivers (Titan Xp is benched with gaming drivers as I understand).

If past releases are any guide, a given TFLOP count translates to much higher gaming performance for Nvidia, so the Titan Xp should be far ahead of Vega FE. Only this seems not to be the case, based on the little we've seen.

Ultimately it will all come down to how much Vega FE's geometry processor, memory architecture and drivers have improved. If Vega FE is close to Titan Xp then AMD have truly closed the gap further.

I mentioned this earlier: the on-paper spec of Vega would put its theoretical FP32 performance lower than the Titan Xp and 1080 Ti, considering the Nvidia cards can always hit their boost clocks.

Traditionally AMD have required a much higher theoretical compute performance to match Nvidia, so naively this looks quite worrying for Vega. HOWEVER, AMD has supposedly done a huge amount of work to address that. We will have to wait for reviews to find out.
 
It would be nice if we could see Titan Xp with professional drivers. It seems to me that Vega FE beats it just due to drivers (Titan Xp is benched with gaming drivers as I understand).

If past releases are any guide, a given TFLOP count translates to much higher gaming performance for Nvidia, so the Titan Xp should be far ahead of Vega FE. Only this seems not to be the case, based on the little we've seen.

Ultimately it will all come down to how much Vega FE's geometry processor, memory architecture and drivers have improved. If Vega FE is close to Titan Xp then AMD have truly closed the gap further.

Titan Xp doesn't have professional drivers.

On your second statement, I'll leave this interesting video here: https://www.youtube.com/watch?v=owL_KY9sIx8
 
Interesting tidbit: the drivers for the Frontier Edition are wholly unique to it. It really isn't Radeon Pro or Crimson ReLive.

Crimson Relive version 17.4.4

[screenshot: sch3Zc29TZC0KaIGWnq-Yw.png]


Radeon Pro version 17.Q2.1

[screenshot: dkMnM1ZSQt_zlIkRCKL63w.png]


Radeon Frontier Edition version 17.20

[screenshot: TtKWlQJNRu6viYLb_eMuzw.png]
 
So we still know next to nothing about this card that has now been "released" and nothing at all about what the RX variant(s) will be like (and those are the ones that will matter to almost everyone who's thinking about buying a Vega card since the Frontier variants start at £1000).

A bit more of this and I'm just going to buy a 1070. Those exist and can be bought and we know what they can and can't do.
 
So we still know next to nothing about this card that has now been "released" and nothing at all about what the RX variant(s) will be like (and those are the ones that will matter to almost everyone who's thinking about buying a Vega card since the Frontier variants start at £1000).

A bit more of this and I'm just going to buy a 1070. Those exist and can be bought and we know what they can and can't do.

Well, if you need a GPU right now, just get it. The Radeon RX Vega cards are only getting revealed at the end of July; we don't even know whether that will be a paper launch or a proper one.
 