
AMD VEGA confirmed for 2017 H1

Status
Not open for further replies.
Also, higher-level APIs will create duplicates of graphics data when certain functions are called. At the API level the programmer has no access to them to unload them; the API keeps them around for compatibility or similar reasons and only unloads them when the program is done. At the driver level you could see those duplicates and move them to system memory... but then you might get a clever programmer who uses direct memory access or some other non-standard, outside-the-box trick to make unusual use of that data, causing issues if the data isn't in VRAM :s
 
Regardless of performance (to an extent), if the smaller Vega is £350+, then AMD won't be doing so well. The reason the 970/390 sold well, and why the RX 480/1060 are currently the best sellers, is their good price points. Above the £300 mark, interest drops off rather steeply, since obviously far fewer people have that much to spend on a GPU.

And a £350+ price point is highly likely given current RX 480 pricing. The 480s cover the £200-300 bracket; anything under £350 would be far too close and could damage sales of both small Vega and the 480. I don't see AMD or retailers dropping 480 prices to compensate either.

If the bigger Vega is akin to a 1080, £550 or below should do well. But as many have said, Linus (from LTT) included, AMD only just catching up to Nvidia with Vega a year later is honestly no good for them. They need to be targeting the Titan XP, and to be fair, that's what the Fury X at least tried to do. Though it ended up competing with the 980 Ti due to its sneaky release, and still lost due to poor stock and not doing quite so well at lower resolutions.

Fingers crossed big Vega (or the 1080 Ti, for that matter) can consistently hit 4K60 in non-Vulkan games and scale down appropriately to 1080p etc., or I'm waiting another year for a GPU...

Nonsense. The RX 480 8GB is £220 to £270 with most at £250, and the GTX 1070 is £380 to £420 on average. If a GTX 1070-ish AMD Vega is £350+ they ain't selling any, it's as simple as that.

So they will be around the £330 to £380 mark, which is £100 more than the 8GB RX 480.
 
Agree that 4GB is 4GB, but I imagine there's more to VRAM than most of us speculate.

@Roff

The 24GB Tesla has more than double the minimum frame rate of a 12GB TXP in Hitman; is the TXP's 12GB of VRAM affecting it in comparison at 4K? :p

Even AMD and AdoredTV said that Doom is a best-case scenario for Vega performance, and the GTX 1080 matches that performance on the same map.

He also said, roughly, 'id probably haven't enabled Async Compute and Shader Intrinsics yet; that's another boost, so it'll be faster than the 1080 (= TXP performance) in Doom', assuming (I imagine) AMD pay, or have already paid, id to enable it at launch.

I like that accent ^^^^ :) it's way better than mine.

That's Trainspotting's Spud and Begbie territory. He's pretty well spoken for an East-coaster; betting Adored is more educated than most of us.
 
If a GTX 1070-ish AMD Vega is £350+ they ain't selling any, it's as simple as that.

+1

Too late in the game; it needs to be a good bit cheaper IMO if they want to sell outwith the AMD users who won't touch Nvidia with a bargepole. Nvidia will reduce pricing too, but not by much: the 780 was still £50+ above the 290 when the 290 released, even after Nvidia's price cut. So AMD will have to factor Nvidia's price cuts in too.

AMD need to price Vega competitively. Everyone's saying that no one will buy them, but the people who refuse to pay north of £300 will jump all over it, and I can assure you there are more of them waiting than those who have already paid north of £350 for a 1070. Way, way more.
 
The 24GB Tesla has more than double the minimum frame rate of a 12GB TXP in Hitman; is the TXP's 12GB of VRAM affecting it in comparison at 4K? :p

Bit OT, but I've been meaning to say it for a while: that benchmark is so bogus. For one, the TXP's minimums are apparently lower than both the 1080's and the Fury X's, and going by their numbers my card is ~10% faster than the P6000.
 
+1

Too late in the game; it needs to be a good bit cheaper IMO if they want to sell outwith the AMD users who won't touch Nvidia with a bargepole. Nvidia will reduce pricing too, but not by much: the 780 was still £50+ above the 290 when the 290 released, so AMD will have to factor price cuts in too.

Right, like many I'm not looking for an e-peen signature item, not that there's anything wrong with that; I'm just looking for a significantly faster card than my GTX 970: GTX 980 Ti sort of performance for under £350.

If AMD can't provide that in the first half of 2017 they might as well pack up and give up because that is nothing special today.

They will, because they ain't packing up yet.
 
Fury and Vega are different cards.

Also, stop and think about what the Fury cards would have been like with regular GDDR5. The power consumption would have been way higher, the PCB would have been much larger, etc.

In case I'm not being clear: take a Titan XP and put 12GB of HBM or HBM2 on it, and it will run faster, consume less power and produce less heat.

Just because Fiji (Fury) cards were absolutely terrible doesn't mean HBM is.

Well, if the Fury had regular GDDR5 it would have been faster at lower resolutions but wouldn't have fared as well at 4K. As a bigger proportion of users game below 4K, it might have been more successful.

The PCB would have been larger, yes, but power consumption MUCH higher, no: memory doesn't use massive amounts of power.
Heat wouldn't have been as much of an issue either; it wouldn't be concentrated around the core as it is with HBM, so an AIO wouldn't be required, reducing the overall cost. They could have had a cheaper and faster card that would have appealed to more gamers.

How can you say the Titan XP would run faster with HBM? It could be slower at lower resolutions. Also there's the extra localised heat to contend with, at much higher clock speeds than the Fury.
 
Well, if the Fury had regular GDDR5 it would have been faster at lower resolutions but wouldn't have fared as well at 4K. As a bigger proportion of users game below 4K, it might have been more successful.

The PCB would have been larger, yes, but power consumption MUCH higher, no: memory doesn't use massive amounts of power.
Heat wouldn't have been as much of an issue either; it wouldn't be concentrated around the core as it is with HBM, so an AIO wouldn't be required, reducing the overall cost. They could have had a cheaper and faster card that would have appealed to more gamers.

How can you say the Titan XP would run faster with HBM? It could be slower at lower resolutions. Also there's the extra localised heat to contend with, at much higher clock speeds than the Fury.

The Fury would NOT have been faster at lower resolutions. Bandwidth is bandwidth: a card with 512GB/s will load any given data faster than a card with 320GB/s of total bandwidth. It's a speed; it means any and all memory access is faster.
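To make the "bandwidth is bandwidth" point concrete, here's a quick back-of-the-envelope sketch. The bandwidth figures are from the post above; the 4GB working set is just an illustrative assumption:

```python
# Time to stream a fixed working set at two memory bandwidths.
# Higher bandwidth moves *any* data faster, regardless of resolution.

def transfer_time_ms(data_gb: float, bandwidth_gb_s: float) -> float:
    """Milliseconds needed to move data_gb at bandwidth_gb_s."""
    return data_gb / bandwidth_gb_s * 1000.0

print(transfer_time_ms(4, 512))  # 7.8125 ms at Fury X-class bandwidth
print(transfer_time_ms(4, 320))  # 12.5 ms at a 320GB/s GDDR5 card
```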

On power, you don't know what you're talking about. Memory uses a LOT of power, and not just the chips; in fact the chips use less than the IO. Sending signals to and from memory accounts for about two-thirds of the power usage, and that is the biggest saving with HBM, because the trace lengths are a tiny fraction of those for GDDR5; that is the entire reason for it being on-die. 512GB/s of GDDR5 would use 85W compared to 28W for HBM. There is a HUGE power difference.

The simple fact is that GDDR5 wouldn't have made it faster at lower resolutions; it just wouldn't. There are many other limitations within Fury. It was never designed for HBM or that much bandwidth; it could only use about 360-380GB/s internally, which is most likely what limits it more than anything else, though the front end certainly did too. GDDR5 with huge bandwidth wouldn't change those limitations, as they were inherent to the architecture.

However, the roughly 50W extra the GDDR5 would have used, plus a much larger memory controller, would have meant that at 250W it could only have had more like 3,200 shaders max, or it would have been a 300W card.
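Taking the post's own figures (85W for 512GB/s of GDDR5 vs 28W for HBM), the TDP arithmetic works out like this; the 250W board-power split is the post's scenario, not an official spec:

```python
# Rough TDP budget: power left for the GPU core after the memory subsystem,
# using the post's figures for 512 GB/s of bandwidth.
TDP_W = 250
GDDR5_MEM_W = 85   # GDDR5 chips + IO/controller (per the post)
HBM_MEM_W = 28     # HBM equivalent (per the post)

core_budget_hbm = TDP_W - HBM_MEM_W      # 222 W left for shaders/clocks
core_budget_gddr5 = TDP_W - GDDR5_MEM_W  # 165 W left for shaders/clocks

# The gap is what a GDDR5 Fury would have had to claw back by cutting
# shaders or by raising board power toward 300 W.
print(core_budget_hbm - core_budget_gddr5)  # 57
```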

As for heat concentrated around the core: another random myth people came up with at the time. Again, two-thirds of the 85W that GDDR5 needs for that level of bandwidth is generated on the memory controller, directly on the die. In fact, because parts of the memory controller run dramatically slower with HBM, it likely reduced heat significantly. The memory controller is likely to be one of the hottest spots on the GPU die.

Fury X didn't NEED a watercooler; look at the Nano or Fury, they ran great temperature-wise. The AIO was there because it made the card basically the quietest reference high-end card in forever, and it ran stupidly cool as well. You can put an AIO on basically any high-end GPU and get a better-cooled, quieter card. There was never any need or requirement for an AIO cooler; it just made the card incredibly quiet.
 
On power, you don't know what you're talking about. Memory uses a LOT of power, and not just the chips; in fact the chips use less than the IO. Sending signals to and from memory accounts for about two-thirds of the power usage, and that is the biggest saving with HBM, because the trace lengths are a tiny fraction of those for GDDR5; that is the entire reason for it being on-die. 512GB/s of GDDR5 would use 85W compared to 28W for HBM. There is a HUGE power difference.
...
However, the roughly 50W extra the GDDR5 would have used, plus a much larger memory controller, would have meant that at 250W it could only have had more like 3,200 shaders max, or it would have been a 300W card.

I understand the 1080 consumes about 180W, right? Does that include the memory subsystem, or is it just the chip? Because since it's using GDDR5X, it should have close to a 50W handicap right there, and I don't think it does. If Vega (after all these efficiency improvements) is over 180W with HBM, something's gone very wrong...

EDIT: http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1080-pascal,4572-10.html
That source says 173W total in gaming and 206W with OC.
 
Why was its Achilles' heel lower resolutions, then? Also the AIO wasn't "quiet"; plenty of posts on here would beg to differ...

Boohoo, some people got loud or broken samples; that doesn't change basic facts. Look up any review: the Fury X was significantly quieter than every other reference card available, simple as that. If it had required an AIO just to be cooled adequately, it would have been running at 85-90C with the AIO, implying it would run above that on air. Running at 50C under load with an AIO, and 65-70C on a non-X Fury with air, doesn't denote a need for watercooling. The AIO was an option AMD chose to put on the Fury X, nothing more or less.

https://www.techpowerup.com/reviews/AMD/R9_Fury_X/30.html

32dBA compared to 42dBA for a 980 Ti. 10dBA equates to 10 times the sound power, but only a bit over 2 times the perceived noise level, IIRC.
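The decibel rule of thumb in that paragraph can be checked directly: every 10dB is 10x the sound power, while perceived loudness roughly doubles per 10dB (a common psychoacoustic rule of thumb, not an exact law):

```python
# Decibel arithmetic behind "10 dBA = 10x the power, ~2x the perceived noise".
delta_db = 42 - 32  # 980 Ti vs Fury X in the TechPowerUp numbers above

power_ratio = 10 ** (delta_db / 10)    # sound power ratio
loudness_ratio = 2 ** (delta_db / 10)  # rule-of-thumb perceived loudness ratio

print(power_ratio)     # 10.0
print(loudness_ratio)  # 2.0
```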

http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/25

43dBA vs 51dBA for the 980 Ti (keep in mind different noise-testing equipment, different rooms, and different distances from the card; the key is consistency between tests on the same site and the difference between the cards in any given setup). Saying the card is loud is simply ignoring all the evidence. Some people got dodgy pumps due to a production issue with Cooler Master (IIRC) pumps, but AFAIK that was fixed and replacements were available for those who wanted them. The card itself is significantly quieter while running 20-30C cooler than the 980 Ti, depending on the review.

As for what its Achilles' heel was... it's almost as if I already pointed it out in the post you replied to.

What makes you think memory gets slower if you use less of it? It's a nonsensical, illogical concept that 512GB/s is slower than less bandwidth just because you draw less memory per frame.

If you're accessing 8GB each time the GPU draws a frame (the GPU accesses memory repeatedly for each frame) at 4K, then at 512GB/s you can draw that data 64 times a second; if you're accessing 4GB per frame, you can do it 128 times a second. But some parts of the GPU that can keep up at 64 frames per second can't at 128. Like, you know... always. There is ALWAYS a limit within the GPU, every generation. Too little bandwidth can be a limit; too much really can't be, it's just inefficient to have more than you need. So if the front end can only feed the back end 64 times a second, but the back end can do its work 128 times a second, the front end is the limit. If it's the other way around, and the front end can feed the back end but the back end can't keep up, then that is the limit.
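The arithmetic in that paragraph, sketched out (the 8GB and 4GB per-frame working sets are the post's own illustrative numbers):

```python
# Upper bound on frames per second imposed by memory bandwidth alone,
# given how much data the GPU touches per frame.

def bandwidth_fps_cap(working_set_gb: float, bandwidth_gb_s: float) -> float:
    """Max times per second the working set can be streamed at this bandwidth."""
    return bandwidth_gb_s / working_set_gb

print(bandwidth_fps_cap(8, 512))  # 64.0 -> 8 GB/frame caps out at 64 fps
print(bandwidth_fps_cap(4, 512))  # 128.0 -> halve the working set, double the cap
```

If some other unit of the GPU tops out below that cap, the bandwidth isn't the bottleneck, which is the post's point about the front end.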

Fury X is shader-heavy, with seemingly not enough throughput to keep it fed at lower resolutions, so it isn't filling up the shaders as heavily. As you raise the resolution, the front end does what it did before and the back end keeps up fine.

Only a couple of people keep blaming the HBM, when in every other generation it becomes fairly obvious what the limitation is.

For instance, I see in Pascal Titan X reviews that it might only be 10-15% ahead at 1080p but routinely 25% ahead at 4K, and that's in games where, at lower settings, the CPU can easily push out higher frame rates at 1080p. So the Titan X shows itself to be fairly significantly limited at 1080p compared to 4K... which is obviously down to the HBM.

I've seen one person on here who took it upon himself to slate HBM everywhere he could, for everything under the sun, yet ignore that there is a limitation within every past Nvidia and AMD card I can think of. Every single time, without fail, that limitation is too little of something: too little bandwidth, too little memory, too few ROPs, too few shaders, too few TMUs. Never once too many of anything. Particularly when, in providing more bandwidth than Fury X could use, it saved power and die size and gave AMD valuable experience producing interposer/HBM chips before NEEDING HBM in future chips.
 
Nonsense. The RX 480 8GB is £220 to £270 with most at £250, and the GTX 1070 is £380 to £420 on average. If a GTX 1070-ish AMD Vega is £350+ they ain't selling any, it's as simple as that.

So they will be around the £330 to £380 mark, which is £100 more than the 8GB RX 480.

I bought a 480 last year for £210, lol. IMO, a "£350+ small Vega" and a "£330-380 small Vega" are the same thing; £20 barely makes a dent in £350. Something more akin to £300 would be nicer. But my cynicism and realistic thoughts push me to think AMD will blunder on pricing and do £350 at least. It's just a prediction, and I really hope I'm wrong (and that it falls way under that).
 
I think AdoredTV has gotten a bit cynical over the last few weeks. I'm not saying he's wrong, but personally it shines through that something has affected his opinion of how good Vega is going to be.

Anyway, my own performance predictions are as follows: small Vega will sit dead in the middle between the 1080 and Titan XP, with driver improvements over the first year pushing it closer to the TXP, and "big" Vega will be slightly faster than the TXP, with the same sort of driver improvement within the first year. I also think they're going to do a "Polaris"-style launch again, since it seemed to work for them last time, meaning pricing is going to be attractive, unless we get *****ed again by EU import fees or retailer price gouging.

I think AMD is done for now with going for the fastest of the fast; instead they're more interested in moving major numbers, as they already know it's going to take some time to beat Nvidia's mindshare (or at least be able to compete with it), and you don't do that with red numbers on the bottom line.

Personally, I'm looking forward to Vega. AMD needs something faster for those who don't want Nvidia for whatever reason; we're going to get that, and hopefully the price will not be awful.
 
Well, I haven't seen a single comparison of the P100 vs the Pascal Titan, but let's not forget that both of these cards are 250W. The P100 has an extra 1,792 FP64 cores which, you know, use power; that's just about half the number of CUDA cores over again. The Titan X's boost clock, which AFAIK is hit pretty often, is 1530MHz, while the P100's boost clock is only 1300MHz, leaving the Titan X 18% faster ignoring everything else. It's also extremely unlikely that, with all the extra power drain, a P100 will maintain its boost clock as well as the Titan X does; but even if it does, the Titan X is 18% higher clocked.
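The 18% figure above is just the ratio of the quoted boost clocks:

```python
# Clock-speed advantage of the Titan X (Pascal) over the Tesla P100,
# using the boost clocks quoted in the post.
titan_x_boost_mhz = 1530
p100_boost_mhz = 1300

advantage_pct = (titan_x_boost_mhz / p100_boost_mhz - 1) * 100
print(round(advantage_pct, 1))  # 17.7 -> the "18% faster" in the post
```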

You obviously have not bothered to read Nvidia's own figures for the cards.

As to Pascal Titan X boost, mine runs at 1860MHz stock and over 2100MHz overclocked. That is a lot of ground for a low-clocking professional card to make up.

I am not going to keep posting on this subject in this thread but I am looking forward to testing the Vega cards to get some solid practical results, I much prefer doing that to quoting theory without any hands on experience.

Practical trumps theory every time.
 
I think AdoredTV has gotten a bit cynical over the last few weeks. I'm not saying he's wrong, but personally it shines through that something has affected his opinion of how good Vega is going to be.

I think his predictions for the 480 were way off the mark; if you watched his videos leading up to it, he was speculating it was going to be this and that, and he was wrong.

I like him, but a lot of his speculation is just that: speculation. None of it is really based on anything other than rumors and armchair maths.

The point is, no one but AMD knows how Vega currently performs; I would imagine at this stage it changes daily as well.

I would like to see AMD launch Vega as two cards at first: one between the 1070 and 1080, priced close to the 1070, and one between the 1080 and Titan XP, priced closer to the 1080. Obviously the performance has to be there for them to sit in those slots.

I have a 1070 currently, but I am waiting for Vega to go back to FreeSync; it only needs to be as fast as a 1070 and priced correctly for me to switch. I know my cousin also has a 1070 and is waiting to switch, and I know of another person with a 290 waiting on Vega to upgrade. There are plenty of people with money ready to make the move; AMD just need to get the performance and price right.

I believe the 1070 is actually a good spot performance-wise right now if you're a 1080p/1440p gamer, and the 1080 is perfect for 1440p pushing to 4K. If AMD demoed Vega at 4K in Doom, that is probably their 1080-competing card.

As always, everything comes down to price: if the performance is there, they can be close to Nvidia on prices; if not, they need to be aggressive to get people to switch.

However, if the performance is not there, I worry for their GPU division. It is all well and good investing in future tech, but the here and now demands here-and-now performance.
 
I think his predictions for the 480 were way off the mark; if you watched his videos leading up to it, he was speculating it was going to be this and that, and he was wrong.

I like him, but a lot of his speculation is just that: speculation. None of it is really based on anything other than rumors and armchair maths.

The point is, no one but AMD knows how Vega currently performs; I would imagine at this stage it changes daily as well.

I would like to see AMD launch Vega as two cards at first: one between the 1070 and 1080, priced close to the 1070, and one between the 1080 and Titan XP, priced closer to the 1080. Obviously the performance has to be there for them to sit in those slots.

I have a 1070 currently, but I am waiting for Vega to go back to FreeSync; it only needs to be as fast as a 1070 and priced correctly for me to switch. I know my cousin also has a 1070 and is waiting to switch, and I know of another person with a 290 waiting on Vega to upgrade. There are plenty of people with money ready to make the move; AMD just need to get the performance and price right.

I believe the 1070 is actually a good spot performance-wise right now if you're a 1080p/1440p gamer, and the 1080 is perfect for 1440p pushing to 4K. If AMD demoed Vega at 4K in Doom, that is probably their 1080-competing card.

As always, everything comes down to price: if the performance is there, they can be close to Nvidia on prices; if not, they need to be aggressive to get people to switch.

However, if the performance is not there, I worry for their GPU division. It is all well and good investing in future tech, but the here and now demands here-and-now performance.

It is pointless guessing what Vega performance will be. Just wait for the launch benchmarks and keep an open mind about everything until then.

The Polaris cards were very good, and based on that I think Vega will be too, but as to performance, only the guys at AMD will know. :)
 
It is pointless guessing what Vega performance will be. Just wait for the launch benchmarks and keep an open mind about everything until then.

The Polaris cards were very good, and based on that I think Vega will be too, but as to performance, only the guys at AMD will know. :)

Yup, it didn't even seem like Raja himself knew how it would perform in most games; I think AMD's focus is elsewhere (the big deal with Google etc. has probably got them thinking about AI more than anything else).
 