
AMD VEGA confirmed for 2017 H1

It would be funny to see AMD throw caution to the wind and release a no holes barred behemoth of a GPU with 16GB of HBM2 and 2x Vega chips on the same card acting as one, with all the compatibility handled at the hardware level... their own Titan in effect, not playing for value but just outright performance that we'd have to pay for.

That would disrupt the high-end market and we'd be in for an expensive few years. Competition can be performance-driven too :)
"No holes barred" eh? I would link to urban dictionary for those who do not know what that means, but I think i would get suspended. Lol :p
 
"No holes barred" eh? I would link to urban dictionary for those who do not know what that means, but I think i would get suspended. Lol :p

Yeah, definitely one you don't want to slip up on the 'd' on.

and 2x Vega chips on the same card acting as one, with all the compatibility handled at the hardware level...

I don't think that's possible even with 2 cores on the same interposer - the way GPUs work, there's just way too big a latency penalty when data needs to be shuffled around.
 
I don't think that's possible even with 2 cores on the same interposer - the way GPUs work, there's just way too big a latency penalty when data needs to be shuffled around.

If they're on the same interposer, wouldn't it be possible (in theory) to give them the same latency & bandwidth between each other as high-frequency HBM memory?

I wonder how much bandwidth and latency is needed to make 2 cores play really nicely with each other. Even if you incurred a 10% loss of theoretical performance, if it was consistently only 10% it'd still be great.
 
It's not just the memory bandwidth/latency but stuff like caches, etc. If you were running 2 operations on separate cores and then needed the results for the next operation (which you won't necessarily know in advance, so there are limits to what you can do with marshalling, etc.), it's going to be a nightmare to avoid massive penalties, cache misses, etc., and to transport the results from core to core as needed.
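For what it's worth, a rough back-of-the-envelope on the "consistent 10% loss" idea above - a minimal sketch where the per-chip throughput figure and the flat penalty are made-up placeholders, not real Vega numbers:

```python
# Hypothetical scaling estimate for a dual-chip card, assuming a flat
# efficiency penalty for shuffling work and data between the two GPUs.
# The 12 TFLOPS per-chip figure is a placeholder, not a real Vega spec.
def effective_tflops(per_chip_tflops: float, num_chips: int, penalty: float = 0.10) -> float:
    """Combined throughput of all chips minus a flat multi-chip penalty."""
    return per_chip_tflops * num_chips * (1.0 - penalty)

single = effective_tflops(12.0, 1, penalty=0.0)   # 12.0 TFLOPS
dual = effective_tflops(12.0, 2, penalty=0.10)    # 21.6 TFLOPS

print(f"{dual / single:.2f}x scaling over a single chip")  # -> 1.80x
```

Even a consistent 10% hit would still mean roughly 1.8x a single chip; the catch, as the reply above points out, is that real inter-chip penalties from cache misses and result shuffling are rarely that consistent.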
 
Sounds like a bloatware beast. AMD, if smart, will make a chip that can scale through the whole range - isn't this where they make the most money? Economies of scale from mass production for the simple peasant hordes; 2 cores would be a concept piece, maybe. Will any top-end card have workings related to their APU?

http://hexus.net/tech/news/cpu/82372-details-amd-zen-16-core-x86-apu-emerge/
 
With HBM2, the memory bandwidth is dictated by the number of memory stacks?
So if we have an 8GB and a 16GB version of the same card, is it likely that the 16GB would have inherently higher performance with every other factor the same?
Or, like with PCIe lanes, would we struggle to see any real-world applications stressing it?
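For a rough sense of the numbers: HBM2's nominal spec is a 1024-bit interface per stack at up to 2 Gbps per pin, i.e. about 256 GB/s per stack, so whether a 16GB card is innately faster depends on whether the extra capacity comes from more stacks or from taller (8-Hi) stacks. A minimal sketch with illustrative configurations:

```python
# Back-of-the-envelope HBM2 bandwidth, assuming the nominal spec of a
# 1024-bit bus per stack at 2 Gbps per pin (the capacities are illustrative).
def hbm2_bandwidth_gbs(num_stacks: int, pin_gbps: float = 2.0, bus_bits: int = 1024) -> float:
    return num_stacks * bus_bits * pin_gbps / 8  # GB/s

print(hbm2_bandwidth_gbs(2))  # 8GB as 2 x 4GB stacks        -> 512.0 GB/s
print(hbm2_bandwidth_gbs(4))  # 16GB as 4 x 4GB stacks       -> 1024.0 GB/s
print(hbm2_bandwidth_gbs(2))  # 16GB as 2 x 8GB (8-Hi) stacks -> still 512.0 GB/s
```

So the 16GB version only gets more raw bandwidth if the capacity is added as extra stacks, and even then, as with PCIe lanes, it only shows up when a workload actually saturates the bus.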
 
Let's be blunt here - 60-70% of the time the conversation has become largely about nVidia because someone had a dig or dis at nVidia in their post, even if it was part of a post about AMD. If people want to keep this thread purely AMD then, well, it starts at home.

You're quite right. Every conversation has at least 2 sides conversing, and I wasn't talking to one in particular; it was aimed at all sides in the off-topic stuff. I was simply agreeing that it would be nice to have on-topic-only chit-chat in certain threads, this being one of them. Instead we have 312 pages for something we know very little about.
 
Wasn't a dig at you btw - just some who come in here moaning about it being an nVidia thread, when half the time you can find them bad-mouthing nVidia 2 pages back or something lol.
 
It would be funny to see AMD throw caution to the wind and release a no holes barred behemoth of a GPU with 16GB of HBM2 and 2x Vega chips on the same card acting as one, with all the compatibility handled at the hardware level... their own Titan in effect, not playing for value but just outright performance that we'd have to pay for.

That would disrupt the high-end market and we'd be in for an expensive few years. Competition can be performance-driven too :)

I'd bite. At least with AMD we can be fairly certain that when we buy a newly released Fury-type card it's going to remain at the top of AMD's stack for years. :D

Wasn't a dig at you btw - just some who come in here moaning about it being an nVidia thread, when half the time you can find them bad-mouthing nVidia 2 pages back or something lol.
It was a fair comment :)
 
Star Citizen ditches DirectX in favour of being Vulkan exclusive.
I hope this doesn't delay the game even more; a few months back they also changed the graphics engine from CryEngine to Amazon's Lumberyard (still CryEngine based, but...).
Unrelated topic, but I didn't want to start a new thread.
[image: l07Qoxt.jpg]
Source: reddit
 
Star Citizen ditches DirectX in favour of being Vulkan exclusive.
I hope this doesn't delay the game even more; a few months back they also changed the graphics engine from CryEngine to Amazon's Lumberyard (still CryEngine based, but...).
Unrelated topic, but I didn't want to start a new thread.
[image: l07Qoxt.jpg]
Source: reddit

Ha. Looks like their Mantle code is going to be used...
 
Hopefully more devs will see sense and start using Vulkan. DX12 was always a reaction to Mantle but never lived up to the hype. I have yet to see a DX12 equivalent of Doom Vulkan.
 
Hopefully more devs will see sense and start using Vulkan. DX12 was always a reaction to Mantle but never lived up to the hype. I have yet to see a DX12 equivalent of Doom Vulkan.

Nope. I found that even titles like TW Warhammer and The Division work much better on the Nano in DX11 than DX12.
Yes, the TWW benchmark only cuts 1fps from my DX11 perf with the Nano, but on my GTX 1080 it was dropping a whopping 20%!
(Kaap had the same issues, which can be found in the TWW benchmark thread.)

However, when actually playing the game, performance on the Nano was 20% worse on the campaign map.

In The Division the difference is less obvious, but it's there also.
And it makes no sense :/

So Vulkan is the only way forward, as Mantle was. But NV didn't want to support Mantle - something they do at least with Vulkan.
And let's not forget, with Vulkan we aren't limited to MS's spyware OS.
 
Nope. I found that even titles like TW Warhammer and The Division work much better on the Nano in DX11 than DX12.
Yes, the TWW benchmark only cuts 1fps from my DX11 perf with the Nano, but on my GTX 1080 it was dropping a whopping 20%!
(Kaap had the same issues, which can be found in the TWW benchmark thread.)

However, when actually playing the game, performance on the Nano was 20% worse on the campaign map.

In The Division the difference is less obvious, but it's there also.
And it makes no sense :/

So Vulkan is the only way forward, as Mantle was. But NV didn't want to support Mantle - something they do at least with Vulkan.
And let's not forget, with Vulkan we aren't limited to MS's spyware OS.

I find that DX12 gave me a much better experience in The Division: higher fps and smoother due to being more consistent. In BF1 I do get better fps, but it's more choppy - it's been months since I played it, though, so that could have changed. So DX12 is mixed for me at the moment. Some go on about DX12 not improving visuals, but in The Division enabling DX12 definitely let me play with higher visuals. It's definitely an improvement, but the work needs to be put in for us to benefit, and Nvidia need to update their hardware so devs will take more notice. AMD need to gain more market share as well, to force them if Nvidia won't. It's a shame that the market leader is behind in this way.
 
Sounds like a bloatware beast. AMD, if smart, will make a chip that can scale through the whole range - isn't this where they make the most money? Economies of scale from mass production for the simple peasant hordes; 2 cores would be a concept piece, maybe. Will any top-end card have workings related to their APU?

http://hexus.net/tech/news/cpu/82372-details-amd-zen-16-core-x86-apu-emerge/

OK, this thing, if real, needs to go in a console. Perfect candidate for 'true next gen'. They could probably be fine making it 8c/16t too, rather than 16/32, since Zen is massively more powerful per thread than the current Jaguar cores they use.


Star Citizen ditches DirectX in favour of being Vulkan exclusive.
I hope this doesn't delay the game even more; a few months back they also changed the graphics engine from CryEngine to Amazon's Lumberyard (still CryEngine based, but...).
Unrelated topic, but I didn't want to start a new thread.
[image: l07Qoxt.jpg]
Source: reddit

And this is great news.

Makes a ton of sense for cross-platform, and Vulkan also appears to be better than DX12 at the moment, both in performance and stability.

Also, if all the consoles start using Vulkan (phones do, and the PS4 uses/used OpenGL so I assume they're switching too), then pretty much everything would run it.
 
Hopefully more devs will see sense and start using Vulkan. DX12 was always a reaction to Mantle but never lived up to the hype. I have yet to see a DX12 equivalent of Doom Vulkan.

While only in a general sense, Vulkan will be much more familiar to developers with OpenGL experience (e.g. id Software), whereas DX12 is quite a mind shift from DX11, etc. Hence the small number of game studios still holding onto OpenGL will likely move over to Vulkan more readily than studios more invested in DX, IMO.
 