AMD VEGA confirmed for 2017 H1

Vega is apparently two chips, so you should be speculating about at least two cards, plus probably cut-down versions of each. You could probably estimate performance from the die size by comparing previous generations' mid-to-high-end and enthusiast chips. I'd guess small Vega will come in at around 1080 performance and big Vega 10-15% faster than that, with pricing starting at around $300 USD and maybe topping out at around $600 USD for big Vega partner cards. Hopefully they work on the power efficiency a little and I'll be happy with that.
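Written out as arithmetic (every figure below is a guess, not a confirmed Vega spec; it's just the "scale a guess off a known card" idea):

```python
# Pure speculation: every number here is a guess, not a confirmed Vega spec.
# The point is only "scale a performance guess off a known card and assume
# a price band like previous generations".

gtx_1080 = 1.00                  # reference point: GTX 1080 = 1.0x

small_vega = 1.00 * gtx_1080     # guess: small Vega lands around 1080 level
big_vega = small_vega * 1.125    # guess: big Vega ~10-15% faster than that

price_band_usd = (300, 600)      # guessed pricing, small Vega up to big Vega partner cards

print(f"small Vega ~{small_vega:.2f}x a 1080, big Vega ~{big_vega:.2f}x a 1080")
print(f"guessed prices: ${price_band_usd[0]} to ${price_band_usd[1]}")
```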
 
Lots of people who know someone in this thread. You should all give Nvidia a call; they're probably desperate for some info. You'd probably bag yourself a Titan in the process.
 
I might well swap my 1070 for a Vega GPU if it's faster and the price to change isn't too steep, as I picked up a mispriced WQHD IPS 144 Hz FreeSync monitor the other day and I'd quite like to try variable refresh rate tech (I won't be able to use it with my 1070).
 
I really don't know why some people on here think that AMD are light years behind Nvidia with the last few generations of graphics cards. None of the generations since the 290 have been THAT far behind the equivalent GeForce cards.

IMHO it has been more down to Nvidia's stronger market share and leverage with game developers than anything related to hardware engineering. If the push towards bare-metal APIs, parallel queues and async compute had taken off in 2013, we might have seen a market share change before now.

Nvidia's tech does lend itself to extremely high clock speeds and that is what has helped keep them ahead. However, I don't think a brute-force approach will do that for much longer (Nvidia may prove me wrong on this :p ), as we see the 1080 struggling to go much faster than 2.1/2.2 GHz on water when it already does 2.1 GHz on air (if you get a good one).

The way some people on here talk you would think that Nvidia were two generations in front. :D
 
There is quite a dramatic difference in performance per watt at the moment, though. The RX 480 uses more power than a 1070 and almost as much as a 1080, yet is much slower.

That is why many doubt just how fast Vega can be without serious progress in that department, although HBM will of course help. In fact I think HBM will be an absolute necessity for them.

For example, if you took the Polaris architecture and scaled it up to the same performance as a 1080, it would be a massively hot and power-hungry chip in comparison.
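To put rough numbers on that (the relative performance and board power figures below are approximate, review-style assumptions rather than exact measurements), the back-of-envelope arithmetic looks like this:

```python
# Back-of-envelope perf-per-watt sketch. The relative performance and typical
# board power figures below are rough, review-style assumptions, not exact
# measurements.

cards = {
    # name: (performance relative to an RX 480, typical board power in watts)
    "RX 480":   (1.00, 165),
    "GTX 1070": (1.35, 150),
    "GTX 1080": (1.65, 180),
}

for name, (perf, watts) in cards.items():
    print(f"{name:9s} relative perf/watt = {perf / watts:.4f}")

# Naive extrapolation: scale Polaris linearly up to GTX 1080 performance.
# Real chips scale worse than linearly, so this is a best-case floor.
target_perf = cards["GTX 1080"][0]
rx480_perf, rx480_watts = cards["RX 480"]
scaled_watts = rx480_watts * (target_perf / rx480_perf)
print(f"Polaris scaled to 1080 performance: at least ~{scaled_watts:.0f} W")
```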
 
Yup, AMD have to make a massive jump in architectural improvements, and I'm just not sure they're up to it when you consider how little they spend on R&D in comparison to Nvidia...
 
Actually, the truth is Nvidia could probably be two generations in front if they wished to, but why bother when they can release mid-range cards and sell them for £600+? Bringing out such fast cards at a higher production cost makes no business sense to them, as the costs involved versus the returns just aren't worth it.

Drip-feed the consumer on performance (relative to the fastest card they could make, which they won't), deliberately hold back hardware features and save them for the generation after, and people get the itch to upgrade every generation or two. That makes much more sense for them.
 
AMD are light years behind Nvidia, it's not even funny; they need HBM just to get near what Nvidia can do with plain old GDDR5.

Their 480 needs more power than the 1070 to run and is a lot, lot slower (it's only two-year-old 970 performance), while Nvidia's 1070, and even the 1080 and the Titan X (Pascal), can be run on a potato. It's bloody laughable :D
 

:confused:

That has pretty much zero to do with the entire point of HBM, which is mostly power saving and smaller form factors. It's not like it offers some incredible performance boost over GDDR5, just a wider pipe. The Fury X could have had GDDR5 and, besides a larger PCB and the RAM using a bit more power, I doubt it would have made much difference.

A lot of reviews point to the fact that Fiji had fewer ROPs than the 980 Ti; if it had 96 like the Ti, it would most likely have been on par, if not faster, from day one.
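For what the "wider pipe" means in numbers, here's a quick sketch using the commonly quoted bus widths and per-pin data rates for the Fury X and the 980 Ti (treat them as approximate):

```python
# The "wider pipe" in numbers: HBM1 uses a very wide, slowly clocked bus,
# GDDR5 a narrow, fast one. Bus widths and per-pin rates are the commonly
# quoted specs for these two cards (treat as approximate).

def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: pins * per-pin data rate / 8 bits per byte."""
    return bus_width_bits * gbps_per_pin / 8

fury_x = bandwidth_gb_s(4096, 1.0)    # Fury X: 4096-bit HBM1 at ~1 Gb/s per pin
gtx_980ti = bandwidth_gb_s(384, 7.0)  # 980 Ti: 384-bit GDDR5 at ~7 Gb/s per pin

print(f"Fury X (HBM1):  ~{fury_x:.0f} GB/s")     # ~512 GB/s
print(f"980 Ti (GDDR5): ~{gtx_980ti:.0f} GB/s")  # ~336 GB/s
```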
 

The power saving is the whole friggin' point. The Fury X saves something like 30 W from using HBM. The Fury X already had worse performance per watt than the 980 Ti; add back the 30 W saved by HBM and there is a monumental difference, which is what we see with this next generation.

The RX 480 has only now reached the same performance per watt as the two-year-old 970, which was built on the larger 28 nm process.
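Putting rough numbers on that claim (the board power figures and the "roughly equal performance" assumption below are approximations, purely to show how far the ~30 W moves the needle):

```python
# Rough arithmetic on the ~30 W claim. The board power figures and the
# "roughly equal performance" assumption are approximations, just to show
# how much the memory power saving moves perf per watt.

fury_x_power = 275.0   # assumed typical Fury X board power, W
ti_980_power = 250.0   # assumed typical 980 Ti board power, W
hbm_saving   = 30.0    # the ~30 W figure quoted above
rel_perf     = 1.0     # treat the two cards as roughly equal performance here

fury_hbm_ppw   = rel_perf / fury_x_power
fury_gddr5_ppw = rel_perf / (fury_x_power + hbm_saving)  # hypothetical GDDR5 Fury X
ti_ppw         = rel_perf / ti_980_power

print(f"980 Ti perf/W:             {ti_ppw:.5f}")
print(f"Fury X (HBM) perf/W:       {fury_hbm_ppw:.5f}")
print(f"Fury X with GDDR5 (est.):  {fury_gddr5_ppw:.5f}")
print(f"Gap to 980 Ti without HBM: {(ti_ppw / fury_gddr5_ppw - 1) * 100:.0f}% worse")
```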
 
And would anyone have cared if it had used an extra 30 watts to be a bit faster? I doubt it; only the point-scoring brigade would. And you're wrong about power being the whole point, as the reduction in PCB space is obviously one of the other plus points.
 
They can't do a card with GDDR5 to match the 1070/1080: it would need the new nuclear plant coming at Hinkley Point to power it and would be as hot as the bloody sun. That's why they put HBM on the Furys, to keep the power and heat down. If they'd used GDDR5, no one would have looked twice at them (which they didn't anyway, tbh), as they'd have drawn an enormous amount of power and run as hot as the bloody sun compared to Nvidia's cards, which can be run on a King Edward and are a hell of a lot cooler.
 

As said above, 30 watts extra is nothing, and considering the card is water-cooled by default, I'm failing to see where this "impossibility" in cooling comes in.

As for Nvidia cards being cooler: reference to reference, which runs cooler, the Fury X or the 980/980 Ti? Even the regular air-cooled Fury is a good 10°C cooler than the 980 or 980 Ti.
 