
AMD VEGA confirmed for 2017 H1

Or they could have gone down the NVidia route and used HBM2 on their professional cards and GDDR5X on their gaming cards.

It is a bad mistake using expensive memory on a card that does not need it.

Going down the Nvidia route is more expensive from an R&D perspective though; they'd have two options:

A) Develop a new 'dual' memory controller. Easiest on the die side, as one die can serve both the professional and gaming markets, but extra development time/effort. Also 'wasted' die area on both variants.

B) Separate dies. This still requires actually making a 'new' GDDR5X controller (admittedly a modified version of the GDDR5 one they already have, but still effort/verification required), plus the overhead of developing, verifying and producing two dies rather than one.

From an R&D side, both of those are far more expensive than going all-HBM2. Assuming the lower-end cards don't 'need' it, they can reduce stacks (as they appear to have done) to minimise costs, but that makes it a BOM or per-card profit-margin limitation rather than an outright cost.

Nvidia have a much bigger budget, so developing two 'new' memory controllers and producing two dies is the easy decision for them.
 

The size of NVidia's budget does not matter; it comes down to how many cards they expect to sell.

AMD by not going the NVidia route are basically saying they don't expect to sell as many cards as NVidia will.
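As a rough sketch of that volume argument, with entirely made-up figures (these are illustrative placeholders, not real AMD or Nvidia costs): the split-lineup route trades extra fixed R&D for a lower per-card memory bill, so it only pays off past a break-even volume.

```cpp
#include <cstdio>

int main() {
    // Illustrative placeholders only, not real AMD/Nvidia costs.
    const double extraRnD    = 50e6; // hypothetical fixed cost of a second die + GDDR5X controller
    const double hbm2Premium = 60.0; // hypothetical extra BOM cost per card for HBM2
    const double breakEven   = extraRnD / hbm2Premium;
    std::printf("Break-even volume: %.0f cards\n", breakEven); // ~833,333 cards
    // Below that volume the single all-HBM2 die is cheaper overall;
    // above it, the split lineup wins. Expected sales volume, not the
    // size of the R&D budget alone, decides which route makes sense.
    return 0;
}
```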
 
Lol, it isn't Doom that is optimised for GCN but Vulkan, which is an API... Btw, from the looks of it this one will sit between the 1070 and the 1080.
This isn't really true. Vulkan isn't inherently better on AMD hardware at all or 'optimized for GCN'. Just the implementation used in Doom.

DX12 and Vulkan are only what the developers make of it. You could easily optimize a Vulkan-led game to run way better on Nvidia hardware if you really wanted to.

The two main things benefiting AMD with DX12/Vulkan at the moment are:

1) Lack of driver overhead. Nvidia has better DX11/OpenGL drivers, so if you largely remove that advantage, AMD cards do better comparatively. Nvidia's OpenGL drivers were particularly far ahead of AMD's, which is the main reason AMD cards get such a big boost with Vulkan in Doom: they are no longer hampered by poor OpenGL support.

2) Async compute shaders. This is something Nvidia cannot do as well as GCN hardware, so any titles that use it (and to be clear, this is something the devs have to implement) will see a greater boost on GCN than on Nvidia; see the sketch below.
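A minimal sketch, assuming a Vulkan renderer, of the detection step that async compute rides on: finding a queue family that supports compute but not graphics. `gpu` is a placeholder for an already-enumerated VkPhysicalDevice; whether work submitted on that queue actually overlaps with graphics is then down to the hardware and driver, which is where GCN does well.

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Look for a queue family exposing compute without graphics: the usual
// candidate for submitting "async" compute work alongside the graphics
// queue. Returns -1 if the device only offers combined families.
int findDedicatedComputeFamily(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const bool compute  = families[i].queueFlags & VK_QUEUE_COMPUTE_BIT;
        const bool graphics = families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT;
        if (compute && !graphics)
            return static_cast<int>(i); // dedicated compute family found
    }
    return -1; // compute will share the graphics queue instead
}
```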
 

Only if you deliberately enable feature sets on Nvidia but not on AMD.
 
You can absolutely tailor low-level optimizations to a fixed (or similar) hardware architecture in order to get superior performance. This is the entire idea behind console optimization. You just wouldn't want to do this on PC, but it is definitely possible.
 
Which only works for one architecture at a time, so it would be a case of coding for AMD or Nvidia, not both.
 
Nvidia busy polishing the 1080 Ti...

Hope AMD come through this time.
Thing is, any 1080 Ti is likely to be slower than a Titan X. There's not really anywhere to go 'up' for Nvidia aside from releasing a prosumer GP100, which won't happen.

If AMD can beat the GTX1080 with big Vega, Nvidia will NOT be able to respond with something more powerful. Only cheaper.
 
AMD shows Vega Cube with 100 TFLOPs

[Image: amd-vegacube-1.jpg (AMD Vega Cube)]


http://www.fudzilla.com/news/graphics/42362-amd-shows-vega-cube-with-100-tflops
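(For context on the headline figure: assuming the cube is four Vega GPUs at roughly 25 TFLOPs of packed FP16 each, 4 × 25 ≈ 100 TFLOPs; that is, a half-precision number, not FP32.)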
 
Dear fellow, it surely does help to read; they used Ultra + TSAA.

Even so, that doesn't change the point that Doom is heavily optimised for/favours AMD, which means the performance in other titles won't be quite as impressive.

If they can do 1080 performance for less than £400, then it would be interesting to see what Nvidia would do with their current pricing... Same goes for Intel's 8c/16t CPUs against Zen's equivalent.
+1. A GTX 1070 - 1080 at £350, that's me happy :)

That's a big if. It would be almost halving the price-to-performance of the 1080... I doubt AMD is sitting on such a big miracle. Humbug's suggestion seems more likely, but still sounds too good to be true (putting the 1070 in its place).

BEST case scenario, they undercut Nvidia's pricing severely... surely?
Or at least reverse the ridiculous trend of 2016.

That would be the best case scenario. The trend before 2016 was that AMD slotted in around where Nvidia had their GPUs (after the 290(X) at least, and even that needed a price drop from AMD due to the 970). I just assume the worst, that AMD will do that again and slot in around the 1070/1080... but then again there's that awkward gap between the RX 480 and the 1070.

I am going to fall off my chair laughing if Vega + HBM2 cannot thrash a Pascal Titan.

Or putting it another way if AMD are going to use very expensive memory, the card had better come up with the performance.

And that's just it... Vega sounds to me like the Fury all over again. Remember how folks were predicting that one would beat the Titan of its time? I can still remember the discussion thread (exactly like this one)... people arguing that HBM would magically give it the power to beat a Titan and that it should be called AMD Zeus or the Titan-killer... and then what did we get?

I for one am pleased with the performance of the M25 and hopefully the desktop card will clock much higher and give some great returns. 8GB is plenty as well for games.

Well the M25 sucks... I never use it on the way home cos the traffic is so bad... how does it help AMD XD
8GB really is enough for most games.

I'm just looking for a card that can do 4k60 on max settings (minus the AA) on most games. Titan XP was close but no cigar, so it's Vega (if it beats the Titan like many people are hoping) or 1080ti. Doesn't matter who releases the card... as long as it doesn't cost the earth and it does the job, I'm buying that card. I would like it to be AMD though... this FreeSync monitor could use a compatible card.
 
You can have code that is simply FAR more optimized for a targeted architecture that still technically works on other hardware. Obviously this is taking the argument to the extreme, but the point is that low-level optimization can absolutely be swung in the direction of Nvidia if a developer wanted to do so; a sketch of the idea follows.
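As a hedged illustration of that (not anyone's actual engine code): one way to keep a single codebase that both vendors run, while still swinging tuning decisions one way, is branching on the vendor ID Vulkan reports. The workgroup sizes below are hypothetical tuning values, not recommendations.

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>

// PCI vendor IDs as reported in VkPhysicalDeviceProperties::vendorID.
constexpr uint32_t kVendorAMD    = 0x1002;
constexpr uint32_t kVendorNvidia = 0x10DE;

// Shared shaders and dispatch logic run everywhere; only small tuning
// knobs (here, a compute workgroup size) branch per architecture.
uint32_t pickWorkgroupSize(VkPhysicalDevice gpu) {
    VkPhysicalDeviceProperties props{};
    vkGetPhysicalDeviceProperties(gpu, &props);
    switch (props.vendorID) {
        case kVendorAMD:    return 64; // matches GCN's 64-wide wavefronts
        case kVendorNvidia: return 32; // matches Nvidia's 32-wide warps
        default:            return 64; // conservative generic fallback
    }
}
```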
 

Or in AMD's direction.

Resistance is futile :D

I'll get my coat...
 
If NVidia really wanted to, they could release a full-fat GP102 with balls-to-the-wall clock speed and an unhealthy amount of volts going through it, give it a water cooler, and then sit back and bask in the glory of having an absolute monster of a card that they could sell for an ungodly amount. :p

I seriously doubt they would do that, though. But they could if they really wanted to.
 
The GTX 1080 is at an inflated price anyway, and GP104 is a clever execution as it can be sold in mobile or desktop. The Mk1 Titan was die-salvaged; then, when Hawaii was strong, they released the Titan Black (the full GK110).

They could sell a real Titan X, a full 30 SM / 3840-shader GP102, call it the Black Girther and charge what they like.
 
Or in AMD's direction.
Of course. I think you're missing the point of what I was saying, though: that low-level APIs like DX12/Vulkan aren't 'optimized' for any specific manufacturer or architecture. Meaning yes, it is specific to Doom's implementation that it gets such a big boost with Vulkan on AMD cards whereas Nvidia benefits little.

I still don't buy this whole "they can charge what they like and people will pay it" idea. I really doubt the Titan X at £1,000 sold all that much. Or maybe there was such limited stock that it didn't matter anyway, and it was largely just a brand-boosting product more than anything. When I see people talking about what the 'high end' card to buy is, the talk is usually about the GTX 1080.

Also, it's still somewhat early in the new node's lifetime. It would probably be too expensive to release binned GP102s at any kind of acceptable price point.
 