
AMD VEGA confirmed for 2017 H1

OK, so in your mind, is the AMD RX 480 a reasonable step over the outgoing 380?


Just a hint: both the 480 (replaces the 380) and the 1080 (replaces the 980) are about 35-40% faster than the cards they are replacing. ;)

Don't think you can assume that the RX 480 replaces the 380 just because it has '80' in the name, as it was priced way higher than most would expect for that segment. I'd argue we should consider it a hybrid between the old x80 and x90 series, as they didn't, and still don't, offer anything for the higher-end market.
 
So basically your speculation trumps all - it's almost like you want them to spend less on GPU development for some reason. Only on the OcUK forums do companies spend more on modifying existing uarchs than on making new ones from scratch.

You might want to look at how often Intel has developed new uarchs from scratch as opposed to the iterative way of doing things. Intel has been conservative for a reason.
It's the same with lots of modern engineering - it's why we still use multistage rockets and not SSTO.

Also, the fab market has changed - there have been major consolidations, and Intel is hitting problems too. Intel's competitors are closer than they have been for years.

Ultimately, AMD has one window of opportunity next year, TBH. It's a blessing in disguise that Kaby Lake is 14nm and desktop 10nm chips won't be around for a while.

Plus, regarding Vega, you might want to go onto Beyond3D and talk to some of the posters there - they appear to have more of a clue than many here purport to have.

I am going by what is being discussed over there.
No, my speculation doesn't trump all. I wasn't making the claims about how much of Intel's R&D was getting spent on GPUs. That was you. I merely asked if you had a source to back that up. I'm guessing you don't. And your comment clearly suggests that it was nothing but mere speculation.

As for the budgets required for developing a new architecture versus updating an existing one, that's not as clear cut as you might think. Not when you're already at the leading edge of things and hitting massively hard diminishing returns. At that point, you have to spend more and more to get less and less. Costs can quickly spiral upwards just to retain that edge. You'd also have to be very naive to think Intel don't invest heavily in researching new avenues. I have no doubt that if they saw some new architectural direction that would give them more of an advantage, they'd have taken it by now.

In terms of Vega, 'go talk to people who sound like they know more about this than you do' is not an argument. That's an admission of ignorance. And I have actually read quite a bit about Vega, but we don't actually *know* very much. There's lots of speculation, which, when it comes to AMD, I've become highly sceptical of. And not just here - trust me, this is *hardly* my only source of info/rumours on processing tech. This place seems more inclined towards well-off consumers than truly techy geek-types. And don't get me wrong, I'm not saying I don't believe it. I've long suggested that Polaris was merely a 'half step' architecture, despite how AMD tried to sell it, and that Vega would be the full-fat new architecture (though still GCN-based) that we were basically being sold on with Polaris. Trust me, I'm far more excited for Vega than for Polaris. But I'm still very sceptical.
 
I've long suggested that Polaris was merely a 'half step' architecture, despite how AMD tried to sell it, and that Vega would be the full-fat new architecture (though still GCN-based)

I agree with this. From the slides in May (or whenever the presentation was), I thought Polaris would be more of a jump than it turned out to be, and that the big changes are coming with Vega, not Polaris.
 
very naive

In terms of Vega, 'go talk to people who sound like they know more about this than you do' is not an argument. That's an admission of ignorance.

Not really - maybe you should go onto those threads, since there are people who work in the area, etc., having some very interesting discussions over there, and maybe you can read the discussion instead of calling people ignorant. That is again you saying your knowledge trumps everybody else's, just via a different route - like calling anyone who does not see things your way naive.

You can't even back up any of your assertions either, so show ME evidence that AMD won't redirect more of its R&D resources towards GPU development after Zen launches. I ask for your evidence then.

Only in your world does a brand new CPU design, which needs more validation for errors and performance, need as much money as iterative improvements - yet the whole engineering world contradicts you.

Hardware enthusiasts complain that Intel is not pushing large improvements every generation because they don't get that iterative improvements are a safer bet - Intel has only had five major ground-up new uarchs in nearly 20 years, and two of those don't even target what we are interested in: Pentium Pro, P4, Core, Atom and Itanium. The rest have been gradual improvements over time.

Yet P4 was a failure, Itanium was a failure and Atom is close to being a financial failure.

That explains why Intel is sticking with the "if it's not broke, don't fix it" mentality whilst enthusiasts moan at them.

Risk=cost.

AMD has had a few ground-up new arches since 2000: the original Athlon, Brazos and Bulldozer. The Athlon 64 has its lineage in the original Athlon. The Phenom had its lineage in the Athlon 64. Bulldozer was a failure and nearly destroyed them.

Risk=cost.

New uarch=more risk.

Iterative improvements=less risk=less cost.

This is why both AMD and Nvidia did the same with Pascal and Polaris.

The last ground-up, fully new GPU designs from AMD and Nvidia are GCN and Fermi. Everything since has been a derivative (although Maxwell is somewhat of a halfway house, with some major changes, especially tile-based rasterisation).


Edit!!

Do you even realise the Athlon 64 was a derivative of the first Athlon??

The follow-up K9 was cancelled, and the Phenom series still has its roots in the Athlon 64 and Athlon. Cost was the reason.

So if developing new uarchs required as much R&D spend as iterative developments, we should see AMD and Intel pushing them much more often. History does not agree with you.

Plus, new uarchs need the software to catch up with them too - that is the other side of the equation. It makes very little sense to keep pushing new uarchs all the time if you already have a solid one.

No, I know why people want to push the idea that AMD HAS to spend as much money on Zen+ as it did developing Zen, and so on - then they can spin the line that AMD won't spend more developing new uarchs on the graphics side, and say AMD is doomed for eternity with graphics.

You know what, we can agree to disagree.
 
Absolute rubbish! A 1070 is much more efficient than a 480. AMD improved their efficiency, but nowhere near the claims/rumours.

The real nonsense is treating 10 watt differences as 'majorly more efficient'.

The RX 480 sits in a 20-30 watt gap between the GTX 980 and 1060 despite having more memory on the board, and all three perform at similar levels. So, evidently, any efficiency differences are indeed negligible unless those 10 watts have now become reason for shouting blue murder around here.

 
The real nonsense is treating 10 watt differences as 'majorly more efficient'.

The RX 480 sits in a 20-30 watt gap between the GTX 980 and 1060 despite having more memory on the board, and all three perform at similar levels. So, evidently, any efficiency differences are indeed negligible unless those 10 watts have now become reason for shouting blue murder around here.


You cannot really compare efficiency without taking performance into account:


Techpowerup
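To make that concrete - and to be clear, the numbers below are illustrative placeholders, not the figures from the Techpowerup chart - here's a minimal sketch of the point: efficiency is performance divided by power, so two cards with similar frame rates but different board power can still differ meaningfully in perf per watt.

[CODE]
# Minimal sketch: efficiency as performance per watt, using made-up
# placeholder numbers purely for illustration (swap in real review figures).
cards = {
    "Card A": {"relative_perf": 100, "board_power_w": 120},
    "Card B": {"relative_perf": 100, "board_power_w": 150},
}

for name, card in cards.items():
    perf_per_watt = card["relative_perf"] / card["board_power_w"]
    print(f"{name}: {perf_per_watt:.3f} relative perf per watt")

# Even at identical performance, a 30 W gap on a 120-150 W board-power
# budget works out to roughly a 25% efficiency difference, which is why
# neither raw frame rates nor raw wattage alone settle the question.
[/CODE]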
 
Not really - maybe you should go onto those threads, since there are people who work in the area, etc., having some very interesting discussions over there, and maybe you can read the discussion instead of calling people ignorant. That is again you saying your knowledge trumps everybody else's, just via a different route - like calling anyone who does not see things your way naive.
That's not what I'm doing at all. I'm not calling *anyone* who has a different perspective than me naive. I'm saying your specific claim - that AMD will be able to back off R&D after Zen releases - is naive. Just that. Please don't make lousy sweeping accusations like that. It's not helping any constructive discussion here.

You can't even back up any of your assertions either, so show ME evidence that AMD won't redirect more of its R&D resources towards GPU development after Zen launches. I ask for your evidence then.
That's not how burden of proof works, dude. I did not make the claim. It's up to you to make the case for it.

My assertion - that AMD will still have to spend mightily to keep up with Intel on the CPU front - is backed up by Intel's crazy-high R&D budgets. If you want to say that's unrepresentative because most of that is going towards GPUs or whatever, then show that it's true. Otherwise it's not proving me wrong, it's just you speculating, which you basically already admitted was the case.

Only in your world does a brand new CPU design, which needs more validation for errors and performance, need as much money as iterative improvements - yet the whole engineering world contradicts you.
So you're just going to ignore where I already commented about this?

Hardware enthusiasts complain that Intel is not pushing large improvements every generation because they don't get that iterative improvements are a safer bet - Intel has only had five major ground-up new uarchs in nearly 20 years, and two of those don't even target what we are interested in: Pentium Pro, P4, Core, Atom and Itanium. The rest have been gradual improvements over time.
Hardware enthusiasts complain about things they don't know about all the time. Assuming that Intel's iterative product releases aren't ground-breaking because Intel is simply 'playing it safe' is a great example of that. As if there is something amazing we aren't getting because Intel is just sitting back, twiddling their thumbs. Whereas in reality, they are pushing forward as hard as they can, and practical and physical limitations have simply reached the point where big gains just aren't possible anymore. GPUs may well find themselves in a similar position before too long. It will not be Nvidia's or AMD's fault when that happens.

No, I know why people want to push the idea that AMD HAS to spend as much money on Zen+ as it did developing Zen, and so on - then they can spin the line that AMD won't spend more developing new uarchs on the graphics side, and say AMD is doomed for eternity with graphics.
There is no intention on my part to 'spin' anything. I have no agenda here or any dog in this fight. I'm just trying to assess reality, that's all. I find this stuff interesting and I like to talk about it.

You seem to be ignoring everything I'm saying about NOT thinking AMD is doomed with their GPUs or anything like that. Sceptical is not 'doomed'. It's rational scepticism, that's all. Just keeping open the possibility that maybe Vega isn't quite what some are cracking it up to be, which has happened with the last two new AMD architectures, if you'll remember. I've got *nothing* against AMD. Just pointing out the reality, is all. It seems you're the one who is overly concerned with that kind of thing, which may well say more about your own agenda in this argument.
 
The real nonsense is treating 10 watt differences as 'majorly more efficient'.

The RX 480 sits in a 20-30 watt gap between the GTX 980 and 1060 despite having more memory on the board, and all three perform at similar levels. So, evidently, any efficiency differences are indeed negligible unless those 10 watts have now become reason for shouting blue murder around here.


LOL, going by that chart you have linked to, there is a 23% difference between the 1060 and the 480 - that certainly isn't negligible. :rolleyes:
 
Says who? :confused:

Emm, have you kind of missed where it was priced at??

You do realise the R9 285 2GB was priced at $249??

The R9 380 4GB rebadge was priced at $199.

The R9 380X 4GB was priced at $230.

The RX480 4GB, which AMD was making a big deal of, was marketed as a $199 card, and the 8GB version at $240.

We had a massive drop in the pound, so that didn't help, but the RX480 was an R9 380/R9 380X replacement.

It's no different than a GTX1060 being a GTX960 replacement either.

The GTX960 2GB launched at $200. The GTX1060 6GB FE was $299 but aftermarket models started at $249.

The GTX1060 3GB started at $210.

Sure, we had EOL deals on the R9 390 and GTX970, but their true replacement is the GTX1070 (AMD has not shown up with their replacement yet).

:p
 
Obvious product line comparison. Similar naming and price category.

Can't remember the last time AMD released an x80 card for 300€+. I'd argue that it's more of a hybrid between the old x80 and x90 than a direct replacement for the 380 series, but it doesn't really matter, as it is competing in the same space as the GTX 1060 and the older GTX 970/980 and R9 290/290X/390/390X, and most of those offer more performance per dollar. The RX 480 isn't bad by any means, but it isn't great either. The best AMD card on the market right now is the Fury, though, and I still don't get why AMD decided on only releasing the horrible hybrid-cooled Fury X instead of the actually great Fury. The Fury would have been the perfect card between the GTX 980 and 980 Ti at, say, around 450€, and would have kept many from picking the GTX 980 Ti over the Fury X.
 
You cannot really compare efficiency without taking performance into account:


Techpowerup

You quoted me, so you can see I did - with a recent article that likely doesn't use data from when the 480 had its initial power issues, and without using a 1070 straw-man argument that overlooks base board power requirements usually being proportionally lower for higher-tier cards.

LOL, going by that chart you have linked to, there is a 23% difference between the 1060 and the 480 - that certainly isn't negligible. :rolleyes:

Small differences generally tend to show up as much higher percentages when the base numbers are small.
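For what it's worth, here's the arithmetic behind that point as a quick sketch - the wattages below are made-up placeholders, not anyone's measured figures:

[CODE]
# Illustration of why the same absolute gap looks bigger or smaller as a
# percentage depending on the base you divide by - all wattages are placeholders.
gap_w = 25             # hypothetical board-power difference between two cards
card_power_w = 110     # hypothetical card-only draw of the lower-power card
system_power_w = 350   # hypothetical whole-system draw under gaming load

card_only_pct = gap_w / card_power_w * 100       # ~23%
whole_system_pct = gap_w / system_power_w * 100  # ~7%

print(f"Against card-only power:    {card_only_pct:.0f}% difference")
print(f"Against whole-system power: {whole_system_pct:.0f}% difference")
# Same 25 W either way; the percentage just depends on what you divide by.
[/CODE]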

Our schools these days. :rolleyes:
 