Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed: 207 votes (39.2%)
  • (on) Overcrowding, standing room only: 100 votes (18.9%)
  • (never ever got on) Chinese escalator: 221 votes (41.9%)

  • Total voters: 528
Status
Not open for further replies.
People were saying the exact same thing when we were speculating about Ryzen. What they managed with Ryzen isn't proof that they're sandbagging, but it's why people think there's a chance they can close the gap.

They closed the gap with Intel, and in some areas passed them, which has inspired hope that they intend to do the same thing with Vega.

As I said, it's obviously not proof of anything, but things simply aren't adding up with what we've seen of Vega so far. It makes no logical sense.

Agree
 
Wow... So many of you are going to be disappointed.

AMD have shown themselves to be around 2 years behind Nvidia. Why some of you think they'll suddenly close this gap completely I have no idea.

And sub £500 for HBM2? Utter delusion.
I think if the FE is a good indication, AMD might just have their worst GFX card release, ever. Worse than the 390 Rebrandeon, worse than Fiji, worse than the 480 aka "fourth coming of the 290 at the same price point".

This release could really sap the last shred of confidence any of us have in AMD's GPU dept.

And for those saying "Well, Ryzen was a success"... they aren't remotely comparable. A whole new CPU built from the ground up with a lot of work by a legendary CPU architect, where AMD funnelled most of their R&D, vs an iterative incremental improvement to GCN, coming hot on the heels of a mildly disappointing Polaris reveal.

Expecting Ryzen's success to have any bearing at all on Vega is just nonsensical.
 
I think if the FE is a good indication, AMD might just have their worst GFX card release, ever. Worse than the 390 Rebrandeon, worse than Fiji, worse than the 480 aka "fourth coming of the 290 at the same price point".

This release could really sap the last shred of confidence any of us have in AMD's GPU dept.

And for those saying "Well, Ryzen was a success"... they aren't remotely comparable. A whole new CPU built from the ground up with a lot of work by a legendary CPU architect, where AMD funnelled most of their R&D, vs an iterative incremental improvement to GCN, coming hot on the heels of a mildly disappointing Polaris reveal.

Expecting Ryzen's success to have any bearing at all on Vega is just nonsensical.
You're missing the point. People are speculating about this not because they're in denial about Vega, but because Vega doesn't make any logical sense. Double the transistors for a marginal increase in speed is unheard of.

It doesn't add up at all, and the reason people are referencing Ryzen is that AMD did something similar there, pretending the IPC was lower than it really was. Intel were caught off guard by Ryzen, so the point is that it's entirely possible AMD are trying to catch nVidia off guard too, as that's really one of the only ways they're going to start clawing back market share.

But as I said, the success of Ryzen isn't indicative of Vega; it's a suggestion that AMD might be up to something, given how little about Vega adds up.
 
Exactly. The engineering sample months ago, running debug Fiji drivers on a debugging PCB with USB monitoring, was pushing overclocked GTX 1080 FPS in 4K DOOM.
Now the Frontier Edition is between a reference 1070 and 1080.

Something's up, and we'll hopefully get some light shone on it at SIGGRAPH.

Edit: Wasn't the same device ID from the Doom demo spotted in Firestrike, running at 1200MHz as well?

Yes it was.

This is another example of why the current performance doesn't make any sense, unless AMD's driver team somehow managed to go backwards in three months.
 
You're missing the point. People are speculating about this not because they're in denial about Vega, but because Vega doesn't make any logical sense. Double the transistors for a marginal increase in speed is unheard of.

It doesn't add up at all, and the reason people are referencing Ryzen is that AMD did something similar there, pretending the IPC was lower than it really was. Intel were caught off guard by Ryzen, so the point is that it's entirely possible AMD are trying to catch nVidia off guard too, as that's really one of the only ways they're going to start clawing back market share.

But as I said, the success of Ryzen isn't indicative of Vega; it's a suggestion that AMD might be up to something, given how little about Vega adds up.


I don't think there are twice the transistors for starters; maybe something like 70% more. I do agree that that is perhaps the most difficult part to explain away, but we don't know what else is in the core. A lot of that transistor budget may be related to HPC and compute, e.g. how much did the HBCC take up?


The fact of the matter is, AMD hasn't increased the stream processor count from Fiji at all, and is relying on clock speed improvements. If they failed to hit the desired clock speeds due to process issues or other variables, then it simply won't live up to the theoretical performance.


Vega at its typical clock speed has less compute and less bandwidth than the 1080 Ti. AMD cards have traditionally needed significantly more of both to be competitive, so on face value we really would expect Vega to be only marginally above the 1080. AMD should have achieved some efficiency improvements, but remember Vega is just the fifth iteration of GCN; it is not comparable with a ground-up design like Ryzen. Some of the fundamental problems that experts believed limited Fiji are still present to some extent: there are still only four shader engines, each trying to feed 1024 GCN stream processors. Achieving a balanced load and decent geometry throughput may still be a challenge.
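To put rough numbers on the raw-throughput point, here's a back-of-the-envelope sketch; the shader counts and bandwidth figures are the published specs, but the sustained clocks are assumptions, not measurements:

```python
# Back-of-the-envelope FP32 throughput: FLOPS = shaders * 2 (FMA = 2 ops) * clock.
# Shader counts and bandwidths are published specs; the sustained clocks below
# are assumed typical values, not official figures.

def tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 throughput in TFLOPS, counting a fused multiply-add as 2 ops."""
    return shaders * 2 * clock_ghz / 1000.0

vega_fe = tflops(4096, 1.35)    # Vega FE: 4096 stream processors, ~1.35 GHz assumed
gtx1080ti = tflops(3584, 1.70)  # 1080 Ti: 3584 CUDA cores, ~1.70 GHz assumed

print(f"Vega FE : ~{vega_fe:.1f} TFLOPS, ~483 GB/s (HBM2, 2048-bit)")
print(f"1080 Ti : ~{gtx1080ti:.1f} TFLOPS, ~484 GB/s (GDDR5X, 352-bit)")
# ~11.1 vs ~12.2 TFLOPS with near-identical bandwidth: at typical clocks
# Vega FE has no raw-throughput advantage to lean on.
```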

Vega is supposed to have a tile-based rendering system for hidden surface removal, but this is very complex and requires very advanced drivers. AMD may have made some design mistakes, or done an Intel Itanium and designed hardware functionality that is incredibly difficult to program for. There could be very complex interactions between the TBR functionality and AMD's task scheduling. Nvidia moved some of the scheduling hardware into the drivers, both to save a lot of hardware complexity and to improve flexibility with different workloads, which can increase performance. It may be that fixed hardware scheduling just doesn't lead to the efficiency gains seen by using a partially software-based optimisation of the workload.
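To make the tile-binning idea concrete, here is a toy sketch of the general technique (purely illustrative; this is not AMD's actual binning rasteriser logic, and the tile size is an arbitrary assumption):

```python
# Toy sketch of tile-based binning: triangles are bucketed into screen tiles
# first, so per-tile depth sorting can reject hidden surfaces before shading.
# Illustrative only; not AMD's actual binning rasteriser.

TILE = 32  # assumed tile size in pixels

def bin_triangles(triangles, width, height):
    """Map each triangle (a list of (x, y) vertices) to the tiles its
    bounding box overlaps. Returns {(tile_x, tile_y): [triangles]}."""
    bins = {}
    for tri in triangles:
        xs, ys = [v[0] for v in tri], [v[1] for v in tri]
        x0, x1 = int(min(xs)) // TILE, int(max(xs)) // TILE
        y0, y1 = int(min(ys)) // TILE, int(max(ys)) // TILE
        for ty in range(max(y0, 0), min(y1, (height - 1) // TILE) + 1):
            for tx in range(max(x0, 0), min(x1, (width - 1) // TILE) + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

tris = [[(0, 0), (60, 0), (0, 60)], [(100, 100), (130, 100), (100, 130)]]
print(bin_triangles(tris, 256, 256))
# Deciding when to bin, how large a batch to bin, and when binning would hurt
# (e.g. geometry-heavy scenes) is exactly the kind of heuristic that makes
# the driver work so hard.
```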


In a similar vein, the GCN architecture has never had strong geometry performance, and in fact things have got relatively worse, since AMD have kept the four shader engines and simply deepened the stack of GCN cores, making the GPU less balanced. With Vega they haven't increased the shader engines to six or eight with a geometry engine in each; they have kept to four. AMD have talked about a new geometry pipeline, but details are vague, and most people believe it will require explicit programming within the game engine, or at best very careful driver work.
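The same kind of envelope maths shows why front-end width matters (assuming the usual rule of thumb of one primitive per shader engine/GPC per clock, and the same assumed clocks as the sketch above):

```python
# Rough peak primitive rate: front-end units * 1 triangle/clock * clock.
# One tri/clock per engine is a common rule of thumb, not an official spec,
# and the clocks are the same assumed sustained values as before.

def gtris_per_sec(front_end_units: int, clock_ghz: float) -> float:
    return front_end_units * 1 * clock_ghz

vega = gtris_per_sec(4, 1.35)   # 4 shader engines -> ~5.4 Gtris/s
gp102 = gtris_per_sec(6, 1.70)  # 6 GPCs on GP102  -> ~10.2 Gtris/s

print(f"Vega ~{vega:.1f} Gtris/s vs GP102 ~{gp102:.1f} Gtris/s")
# Deepening each shader engine adds compute but no front-end width, which is
# why the geometry concern carries over from Fiji to Vega.
```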




AMD have shown repeatedly in recent years that they prefer hardware-based solutions and brute performance with a lighter driver stack. Nvidia have flip-flopped between these, but since Fermi have been very much along the line of simplifying hardware and having very complex drivers. The simpler hardware is less likely to be faulty, can have additional cores or be made smaller, and simply runs faster; Nvidia GPUs have recently had big clock speed advantages. Nvidia always pride themselves on the software, it is what they excel at. That is why CUDA is the de facto industry standard for compute, and why Nvidia hardware with less theoretical compute can often beat (or at least catch up to) AMD's brute-force approach.
 
High end isn't where the profit is. You've just discredited yourself with that one sentence.

Expensive halo products are made to sell more of the lower end ones.
The profit margins on high end products are absolutely huge. Do you really think the Titan XP costs $500 more than the 1080 Ti to make?

The total volume is much smaller, so that will limit revenue, but it's easy money; that is why they do it. If there were no healthy profit margins it would be a pointless endeavour. Low-end parts will have a higher volume in total and the net profits will add up, so that is the meat and potatoes. But the high end pays back a nice chunk of the R&D for relatively little work, as long as you have a good architecture. The 1080/Ti/TXP are basically just scaled-up 1050/1060 GPUs, so the core design costs are reasonable. The work in making the 1080 Ti perform as well as it does will be valuable for the next generation's 1060 product (e.g. a Volta 2060). All the while they might get $150 a card sold.
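As a toy illustration of the volume-versus-margin point (the $150-per-card figure is from above; the unit volumes and mainstream margin are invented purely for the arithmetic):

```python
# Toy margin-times-volume comparison. Only the $150/card halo margin comes
# from the post above; all volumes and the mainstream margin are invented
# purely to illustrate the shape of the argument.

halo_margin, halo_units = 150, 100_000                 # hypothetical
mainstream_margin, mainstream_units = 30, 10_000_000   # hypothetical

halo_profit = halo_margin * halo_units                    # $15M: easy money
mainstream_profit = mainstream_margin * mainstream_units  # $300M: the volume play

print(f"halo: ${halo_profit:,}  mainstream: ${mainstream_profit:,}")
# Volume dominates total profit, but the halo line still pays back a nice
# chunk of R&D for what is largely a scaled-up mainstream design.
```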
 
This is another example of why the current performance doesn't make any sense, unless AMD's driver team somehow managed to go backwards in three months.
The wccftech guy got the same performance, so it doesn't mean anything, except that specific settings make a big difference.
 
You're missing the point. People are speculating about this not because they're in denial about Vega, but because Vega doesn't make any logical sense. Double the transistors for a marginal increase in speed is unheard of.

It doesn't add up at all, and the reason people are referencing Ryzen is that AMD did something similar there, pretending the IPC was lower than it really was. Intel were caught off guard by Ryzen, so the point is that it's entirely possible AMD are trying to catch nVidia off guard too, as that's really one of the only ways they're going to start clawing back market share.

But as I said, the success of Ryzen isn't indicative of Vega; it's a suggestion that AMD might be up to something, given how little about Vega adds up.

I think that would have made a lot of sense if it had happened before Vega was released.

Since Vega has been released, neither option seems to me to make much sense:

1) Vega has double the transistor count of Fiji and is only marginally better. As you say, that's unheard of. It doesn't make much sense.

2) AMD have launched Vega and are selling very expensive cards that fit neither the gaming market nor the pro market (not certified), and that have been deliberately hobbled so that the rather poor performance will catch nVidia off guard when AMD undo the hobbling and relaunch Vega at least a month after the first launch, all while degrading their reputation and Vega's reputation for a month as their new flagship product looks mediocre at best. "Hey hey hey! You paid £1000 for a card that we hobbled for a month just to fool nVidia. But now we'll send you a real one. You don't mind, do you? Just a prank, mate." That doesn't make much sense either.

I'm leaning towards (1) and AMD somehow bodging the job that much. It doesn't make sense, but I think it's less implausible than the other option.
 
The profit margins on high end products are absolutely huge. Do you really think the Titan XP costs $500 more than the 1080 Ti to make?

The total volume is much smaller, so that will limit revenue, but it's easy money; that is why they do it. If there were no healthy profit margins it would be a pointless endeavour. Low-end parts will have a higher volume in total and the net profits will add up, so that is the meat and potatoes. But the high end pays back a nice chunk of the R&D for relatively little work, as long as you have a good architecture. The 1080/Ti/TXP are basically just scaled-up 1050/1060 GPUs, so the core design costs are reasonable. The work in making the 1080 Ti perform as well as it does will be valuable for the next generation's 1060 product (e.g. a Volta 2060). All the while they might get $150 a card sold.
Absolutely not. However, the vast majority of people buying don't buy those cards.

They sell orders of magnitude more cards at the lower price points, so even if they are making $150 per chip sold at the high end, they're selling so many more of the cheaper ones that it doesn't even compare.

The profit margin isn't entirely why they make these products. This is why I said earlier that they make the halo products to sell the mainstream ones. People feel nice buying an nVidia card when nVidia currently makes the fastest graphics cards available. They feel that they're getting a little part of that by buying the cheaper cards.

It's the way marketing works.
 
Absolutely not. However, the vast majority of people buying don't buy those cards.

They sell orders of magnitude more cards at the lower price points, so even if they are making $150 per chip sold at the high end, they're selling so many more of the cheaper ones that it doesn't even compare.

The profit margin isn't entirely why they make these products. This is why I said earlier that they make the halo products to sell the mainstream ones. People feel nice buying an nVidia card when nVidia currently makes the fastest graphics cards available. They feel that they're getting a little part of that by buying the cheaper cards.

It's the way marketing works.


I think we are talking at cross purposes.

I totally agree with what you are saying here, and have presented to the AMD fans exactly why AMD needs halo products.

My point was that the high-end products absolutely need a high profit margin. They'll earn millions. Cutting prices at the very high end may only net you slightly more sales because, as you say, volume is just low anyway. So there is no point in racing to the bottom; you price them to maximise profit, not volume, and let the mainstream buyers look up in envy and think they are getting a bargain (they are, in fps per $).

But the very high profit margins are very attractive; otherwise they would never sell Quadro cards, for example.
 
The first guy streaming did some productivity benchmarks near the start of his stream, where it did OK to excellent, but nothing that would define the card.


Productivity or compute?

In productivity it appears somewhere between a 1060 and a 1080, running rough drivers.

In raw compute for an HPC-type workload I expect it does quite well and gets very close to a 1080 Ti. In deep learning it may beat everything outside the P100, at a tenth of the price.
 
Productivity or compute?

In productivity it appears somewhere between a 1060 and a 1080, running rough drivers.

In raw compute for an HPC-type workload I expect it does quite well and gets very close to a 1080 Ti. In deep learning it may beat everything outside the P100, at a tenth of the price.

Can't remember the details now, wasn't paying much attention to the stream at that point, but he ran some productivity suite (wasn't software I'm familiar with) that has a compute component; the tests were coming back anywhere from slightly below a 1070 through to one or two of them close to, or maybe slightly ahead of, the Pascal Titan.

I'm leaning towards (1) and AMD somehow bodging the job that much. It doesn't make sense, but I think it's less implausible than the other option.

It's possible there is a certain amount of working around the 14nm process at GF. It seems it is optimised towards low-voltage operation at low to normal clock speeds and doesn't do well in high-frequency, high-voltage scenarios, so they may be having to pad out a certain amount of the core to allow for that.
 
I don't think there are twice the transistors for starters; maybe something like 70% more. I do agree that that is perhaps the most difficult part to explain away, but we don't know what else is in the core. A lot of that transistor budget may be related to HPC and compute, e.g. how much did the HBCC take up?

I did a rough extrapolation from the RX 580 core (5,700 million transistors on 232mm²), which results in about 12.3 billion on the Vega die. Now, that's what I am concerned about, since the only reason I was waiting for Vega was the advertised 15-18 billion transistors. IMO transistor count can determine the long-term performance of a chip; even if it's not working well out of the box, it will age better. Just curious about your +70% estimate [assuming that you are comparing it with the Fury X].
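For reference, the scaling arithmetic is just density times area (a naive estimate that ignores HBM PHYs and library differences; using the 484mm² figure Raja gives later in the thread lands slightly under the 12.3 billion quoted above, which implies an assumed die of roughly 500mm²):

```python
# Naive transistor extrapolation: assume Vega matches Polaris (RX 580)
# transistor density and scale by die area. Ignores HBM2 PHYs, different
# cell libraries, etc., so treat it as a ballpark only.

polaris_transistors = 5.7e9  # RX 580: 5.7bn transistors
polaris_area_mm2 = 232.0
vega_area_mm2 = 484.0        # Raja's figure, quoted below

density = polaris_transistors / polaris_area_mm2  # ~24.6M transistors/mm^2
vega_estimate = density * vega_area_mm2

print(f"estimated Vega transistor count: {vega_estimate / 1e9:.1f} billion")
# ~11.9bn at 484mm^2 (a ~500mm^2 die would give the ~12.3bn quoted above);
# either way it is well short of the advertised 15-18bn.
```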

And also, why do you think dedicated circuitry might be needed for HPC? If it's machine learning, the forward problem is mostly a Monte Carlo and hence can be parallelised using generic GPU functionality. The backward problem, in the case of higher-order convolutions [let's say of the order of magnitude of 100], also reduces to a Monte Carlo (unless you can find a closed-form distribution) followed by an optimisation step. Quick matrix transformations, like the Cholesky decompositions we have to do for joint normals, may require matrix-related circuitry, but I have no reason to believe that Vega might house something that specific.
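For what it's worth, the "Cholesky for joint normals" step is ordinary dense linear algebra that generic GPU FMA units already handle; a minimal NumPy sketch of the idea:

```python
import numpy as np

# Sampling joint (multivariate) normals via a Cholesky factor: cov = L @ L.T,
# then mean + z @ L.T turns i.i.d. standard normals z into correlated draws.
# Plain dense linear algebra; nothing here needs dedicated circuitry.

rng = np.random.default_rng(0)
mean = np.array([1.0, -2.0, 0.5])
cov = np.array([[2.0, 0.5, 0.3],
                [0.5, 1.0, 0.2],
                [0.3, 0.2, 1.5]])  # symmetric positive definite

L = np.linalg.cholesky(cov)
z = rng.standard_normal((100_000, 3))
samples = mean + z @ L.T

print(np.cov(samples, rowvar=False).round(2))  # ≈ cov, up to sampling noise
```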
 
I did a rough extrapolation from the RX 580 core (5,700 million transistors on 232mm²), which results in about 12.3 billion on the Vega die. Now, that's what I am concerned about, since the only reason I was waiting for Vega was the advertised 15-18 billion transistors. IMO transistor count can determine the long-term performance of a chip; even if it's not working well out of the box, it will age better. Just curious about your +70% estimate [assuming that you are comparing it with the Fury X]

Raja just confirmed on Twitter that Vega is 484mm².

So it's really comparable to GP102 in size.
 
What I don't get is: the FE ran DOOM in Vulkan much slower than the card they showed off months ago (didn't it?), so either the RX and the FE are different, or something has gone wrong since then.
Exactly. The engineering sample months ago, running debug Fiji drivers on a debugging PCB with USB monitoring, was pushing overclocked GTX 1080 FPS in 4K DOOM.
Now the Frontier Edition is between a reference 1070 and 1080.

Something's up, and we'll hopefully get some light shone on it at SIGGRAPH.

Edit: Wasn't the same device ID from the Doom demo spotted in Firestrike, running at 1200MHz as well?

Yes it was.

When that Doom test was run, wasn't it shown that an overclocked Fury X came quite close? Plus, it may have been beating the 1080, but that was in Vulkan, where AMD have a significant advantage. In regular DX11 games, going tit for tat, it would have been slower.
 
just realised

valve = 5 letters

5 letters + rx vega is 11

siggraph is 8 letters

11-8 = 3


so in actual fact this is all a ploy and rx vega never existed, the real point of all this hype was


half life 3 is launching at siggraph

Damn, you're right.


Wow... So many of you are going to be disappointed.

AMD have shown themselves to be around 2 years behind Nvidia. Why some of you think they'll suddenly close this gap completely I have no idea.

And sub £500 for HBM2? Utter delusion.

They're not going to close the gap completely but they did purposely leave the high end alone last year in order to work on Vega so they could catch up this year.
 
People were saying the exact same thing when we were speculating about Ryzen. What they managed with Ryzen isn't proof that they're sandbagging, but it's why people think there's a chance they can close the gap.

They closed the gap with Intel, and in some areas passed them, which has inspired hope that they intend to do the same thing with Vega.


AMD didn't sandbag with Ryzen. Each test was showing off an aspect of Ryzen. I would recommend watching the AdoredTV video on AMD and sandbagging.

Concerning the Doom demo shown earlier this year: we saw a Vega GPU clocked to 1200MHz (?) trading blows with a 1080 (overclocked or stock?).
Is that faster than the Fury in Doom?
If it is, then considering that it was using the Fury drivers, that would demonstrate improved performance at the hardware level. Probably showing better load scheduling across the GCN cores (what are they called?)
 