Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed: 207 votes (39.2%)
  • (on) Overcrowding, standing room only: 100 votes (18.9%)
  • (never ever got on) Chinese escalator: 221 votes (41.9%)
  • Total voters: 528
I died :D


:confused:
 
AMD Vega is 60% higher clocked and almost twice the chip size, for only 20% more performance at 2560x1440 over the Fury X (AMD's own presentation slides, not mine).
And that lead shrinks further when compared to a custom watercooled Fury X running +96mV at a 1237MHz core clock. But it is diminishing returns, since at +48mV it can do 1190/600 with the stock AIO.

Adding transistors just to burn more power doing nothing seems a false economy. They could have added transistors to Polaris and called it a day, because it is a far more efficient GPU.
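
Quick back-of-the-envelope on those slide figures (just a rough sketch taking the 60% clock and 20% performance numbers at face value):

```python
# Rough per-clock comparison using the slide figures quoted above.
clock_ratio = 1.60  # Vega clocked "60% higher" than Fury X (per the slides)
perf_ratio = 1.20   # "20% more performance" at 2560x1440 (per the slides)

# If performance only rose 20% while clocks rose 60%, per-clock throughput
# relative to Fury X comes out to roughly:
per_clock = perf_ratio / clock_ratio
print(f"Per-clock performance vs Fury X: ~{per_clock:.2f}x")  # ~0.75x
```

So on those numbers, per clock it's doing about three quarters of what Fiji did.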

Hun, Vega isn't almost twice the chip size of the Fury X. You know that going from 28nm to 14nm doesn't scale perfectly. IPC loss is to be expected on a node shrink; how much IPC loss occurs would depend massively on the arch, I would suppose.
Yes, Polaris is more efficient than Fiji, but again on a way smaller die with 43% fewer shader units, where clock speeds are attained far more easily with less voltage (bigger dies also see diminishing returns to some extent). If they had made Polaris bigger with 4096 shader units, they probably would have obtained similar results (rough theoretical numbers below), because there aren't night and day gains between Fiji and Polaris on any front either...
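
For a rough sense of what a hypothetical 4096-shader Polaris would look like on paper, here's the theoretical FP32 throughput (the ~1050MHz and ~1266MHz stock clocks are my assumption for illustration, not figures from this thread):

```python
# Theoretical FP32 throughput: 2 FLOPs (one FMA) per shader per clock.
def tflops(shaders, clock_mhz):
    return 2 * shaders * clock_mhz * 1e6 / 1e12

print(f"Fury X, 4096 SP @ 1050 MHz: {tflops(4096, 1050):.1f} TFLOPS")         # ~8.6
print(f"Polaris 10, 2304 SP @ 1266 MHz: {tflops(2304, 1266):.1f} TFLOPS")     # ~5.8
# Hypothetical 'big Polaris' with Fiji's shader count at Polaris clocks:
print(f"4096 SP @ 1266 MHz (hypothetical): {tflops(4096, 1266):.1f} TFLOPS")  # ~10.4
```

On paper that hypothetical chip lands around 20% above Fiji, which is in the same ballpark as the uplift the Vega slides claim.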

Not all Fury X's get those clocks; mine can't get more than 1150/550 no matter what I throw at it, and just like Fiji, some Vegas will clock higher than others. AMD (and Nvidia) have to ensure stability for everyone, not on a per-card basis, hence why you can always overclock a bit.

While adding transistors that look like they are doing nothing seems a false economy, and rightfully so, those transistors are not doing nothing as you say: they are allowing the chip to clock higher. Understand that they didn't have a choice, otherwise they wouldn't have had the necessary clock speed to be "competitive". This is a fact, and verifiable: if those transistors were not there, Vega would have way lower clocks.
Now if Vega had been a ground-up new arch, things would probably have been very different, but it isn't, and they had to make do with what they had at hand. They probably started working on Vega before Fiji even hit the market.

At the end of the day, the fact of the matter is that GCN is an aging architecture. They can enhance things as much as they want, and change things in the arch to try and get around "problems", but all in all this architecture will always be overall weaker for gaming. Now Infinity Fabric might do them really well, but Nvidia are also working on their own "Infinity Fabric" (and probably have been for quite some time).
There's a reason AMD have gone all-in on raw compute and all that: it's where their architecture is most efficient, by AMD standards.
 
AMD claim Vega supports 'Infinity fabric interconnect'. Anyone know what that is?

I do think there might be some magic we will see in future drivers. The technology in Vega is there. It just needs tapping into.

Interesting. "Flexible coherent interfaces across GPU and CPU cores!"

Actually I thought it might mean there might be some sort of union of GPU and CPU, but alas that's not what they mean.
 
That is where the problem is with GCN: most things have to be tapped into to be good. Most of the good stuff is developer dependent (I wouldn't be surprised if TBR is somewhat developer dependent, otherwise there wouldn't be a need for an alternative), and that is a problem when your GPUs only occupy a small part of the market.
That being said, I do expect the Furys and Polaris cards to still be quite relevant and decent when Nvidia really decides to embrace other APIs.
 
Simplistically, IF in its current form is largely a general-purpose interconnect that allows monolithic, self-contained processing packages (whether CPU or GPU) to talk to each other. The most immediate benefits will be for the professional markets, where they'll be working with data sets, etc. that aren't complicated by the intricacies and realtime dependencies of game data, and which can require far bigger data transfers and batch processing at a completely different level to gaming use.
 
Unfortunately nothing was announced at the RX Vega launch, other than Linus hinting at better CF support in his presentation :(
Raja has said Vega supports IF, but that's all.
 
Do yourself a favour: as soon as anyone posts idiotic images or gifs, put them on ignore. Do the same for the relentless trolls and your browsing experience will improve massively. Ironically, a lot of the people I have on ignore only post drivel in the Graphics Card forum. Outside of this sub-forum some of them are perfectly rational.
What was wrong with that? I thought it looked quite good and very on topic.
 
I was hoping it would mean something like that, a sort of CPU-GPU co-communication, but in reality I think what is meant is more like Epyc: technology to put together multiple smaller dies and have them operate as one.

Something I think Navi is going to be. Think Epyc as a GPU.

Hence why there was no mention of CF at the AMD event.
 
Maybe Bethesda games (let's assume they are working on a new Oblivion, as it's next in the series) and Far Cry 5 might actually use HBCC.

Both were mentioned in the AMD video which everyone has now seen a million times.

Raja says, "We are working with developers to bring the benefits of HBCC." Then the dude from Bethesda starts talking about making 'big' worlds.
 
Yes, I agree, there is huge potential on the compute side of Vega 64. Given that exactly the same GPU, in the form of the Vega FE, is beating the TXp, we know what it is capable of.
But utilising that for gaming requires a lot of effort. However, we have what we have atm. :/

I doubt it was intentional, but when you have miners buying a £300 AMD card for £600, it looks like a stroke of genius not to focus the RX Vega card on gaming, assuming that peeps into compute will buy them.
 