
AMD VEGA confirmed for 2017 H1

Aye, it is rather odd to say the least; all they've shown and mentioned are FP16 use cases and performance.
If they don't have the FP64 performance to compare with Tesla, it might mean they'll try to compete on price and FP16 alone.
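For a bit of context on those ratios, here's a quick back-of-envelope in Python using commonly quoted peak figures (treat them as approximate; the 1/2 and 1/16 FP64 rates are the usual ones for GP100 and Fiji respectively):

```python
# Back-of-envelope using commonly quoted peak figures (approximate):
# GP100 runs FP64 at half its FP32 rate and FP16 at double it, while
# consumer GCN parts like Fiji run FP64 at roughly 1/16 of FP32.
p100_fp32 = 10.6                         # TFLOPS, Tesla P100 (NVLink part)
print("P100 FP64 :", p100_fp32 / 2)      # ~5.3 TFLOPS
print("P100 FP16 :", p100_fp32 * 2)      # ~21.2 TFLOPS
fury_x_fp32 = 8.6                        # TFLOPS, Fury X
print("Fury FP64 :", fury_x_fp32 / 16)   # ~0.5 TFLOPS: no contest at FP64
```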


Could be going after the deep learning community with much lower costs than the Tesla GP100. Google did have an agreement with AMD for datacenter machine learning GPUs, but that seems short-lived since Google is very happy with its own TensorFlow hardware, which is much cheaper, faster and lower power than GPUs.
 

The TPU (Tensor Processing Unit) Google created is optimised to run established neural networks, but not train them.

My assumption is Google intend to train new networks on supercomputers using Vega GPUs, then run them on their specifically optimised TPUs.
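To make the inference/training distinction concrete, here's a minimal NumPy sketch (hypothetical sizes, not Google's actual pipeline). The first-generation TPU is reported to work on 8-bit integer multiply-accumulates, which is fine for running a trained network but not for training it:

```python
# Minimal NumPy sketch (hypothetical sizes, not Google's pipeline) of why
# cheap integer arithmetic is enough for inference: the weights are fixed,
# so they can be quantised once and multiplied with int8 MACs.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128)).astype(np.float32)  # trained weights
x = rng.standard_normal(256).astype(np.float32)         # one input

# Quantise to int8, accumulate in int32 (the TPU-style inference path).
sw, sx = np.abs(W).max() / 127.0, np.abs(x).max() / 127.0
W_q = np.round(W / sw).astype(np.int8)
x_q = np.round(x / sx).astype(np.int8)
y = (W_q.astype(np.int32).T @ x_q.astype(np.int32)) * (sw * sx)

# Close enough to the float32 reference for inference purposes.
print(np.max(np.abs(y - W.T @ x)))
```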
 
No one can know how big a difference it makes yet, though. It sounds to me like it's hugely different and capable of being a massive improvement. We will just have to wait and see if it can actually show that.

True, and I think some people get too hung up on the differences between an iteration/evolution and a brand-new architecture. A brand-new architecture is not necessarily a good thing: Bulldozer was a brand-new, ground-up design; the 2900 XT was a brand-new, ground-up design; the FX 5800 Ultra was a brand-new, ground-up design. Hawaii, meanwhile, was an iteration of GCN and very good, etc.


Vega does look like the most significant iteration of the GCN architecture, so we could see some dramatic improvements within certain bottleneck scenarios. The difficulty is that removing one bottleneck just shifts the limitation to the next constraint, so it can take a lot of small changes to realise big performance gains.
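A toy illustration of the bottleneck-shifting point (made-up stage costs, nothing Vega-specific):

```python
# Toy numbers only: per-frame cost of each stage in some imaginary GPU.
stage_cost = {"geometry": 4.0, "rasteriser": 7.0, "shading": 5.0, "memory": 6.0}

def bottleneck(costs):
    # The slowest stage sets the frame time, whatever the others do.
    return max(costs, key=costs.get)

print(bottleneck(stage_cost))    # 'rasteriser' is the limit...
stage_cost["rasteriser"] = 3.0   # ...so fix it...
print(bottleneck(stage_cost))    # ...and 'memory' becomes the new limit
```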
 
The TPU (Tensor Processing Unit) Google created is optimised to run established neural networks, but not train them.

My assumption is Google intend to train new networks on supercomputers using Vega GPUs, then run them on their specifically optimised TPUs.

At the current stage, yes, but Google's plan is to be training the networks on future TPUs. Training and using a neural network are actually very similar; there is additional maths involved in working out the backpropagation error derivatives, but the TPU should be able to do that. The biggest difference relates to the large amount of data involved in training: a GPU with a lot of high-bandwidth memory is needed, which the TPU lacks. In the future Google could create TPUs with more memory, potentially HBM2, and possibly some additional compute performance for training. The current TPU is designed to also work inside autonomous cars, for example, as well as in Google's data farms, so it's very low powered.
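As a rough sketch of the extra work and memory training adds, here's a tiny two-layer network in NumPy (made-up sizes). The backward pass needs both the backpropagated error derivatives and the forward activations kept around, which is where the appetite for large, high-bandwidth memory comes from:

```python
# Tiny two-layer net in NumPy (made-up sizes). Inference is just the
# forward pass; training adds the derivative maths AND forces the forward
# activations (h, and the input x) to stay in memory for the backward pass.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((64, 512)).astype(np.float32)          # input batch
W1 = 0.05 * rng.standard_normal((512, 256)).astype(np.float32)
W2 = 0.05 * rng.standard_normal((256, 10)).astype(np.float32)
target = rng.standard_normal((64, 10)).astype(np.float32)

# Forward pass (all that inference needs).
h = np.maximum(x @ W1, 0.0)    # ReLU; must be stored if we want to train
y = h @ W2

# Backward pass: the backpropagation error derivatives.
dy = (y - target) / len(x)     # d(loss)/dy for a squared-error loss
dW2 = h.T @ dy                 # uses the stored activation h
dh = (dy @ W2.T) * (h > 0)     # ReLU derivative
dW1 = x.T @ dh                 # uses the stored input x
W1 -= 0.1 * dW1                # plain SGD step
W2 -= 0.1 * dW2
```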
 
I'm getting excited! A Fury X at 1500 MHz would already be a nice upgrade from my current GTX 980 Ti, so a ~50% clock boost plus other improvements should make it a decent upgrade over any GTX 1080/GTX 1070/GTX 980 Ti/etc. It will probably trail the GTX 1080 Ti in most benchmarks, but that doesn't really matter, as none of Nvidia's cards support FreeSync or HDMI 2.1 (please, AMD, don't leave HDMI 2.1 out of the specs!).
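For what it's worth, the clock arithmetic (taking the Fury X's 1050 MHz reference boost clock; the 1500 MHz figure is just the rumour):

```python
fury_x_mhz = 1050       # Fury X reference boost clock
vega_rumour_mhz = 1500  # the ~1.5 GHz figure being thrown around
print(vega_rumour_mhz / fury_x_mhz)  # ~1.43, i.e. a 40-50% clock bump
```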
 
No one can know how big a difference it makes yet, though. It sounds to me like it's hugely different and capable of being a massive improvement. We will just have to wait and see if it can actually show that.
I don't think people are arguing about how big a difference it'll make, just saying the general architecture of the cards is still GCN. The NCU is just a new iteration of the compute engines; from what I understand, the biggest improvement seems to be the ability to split FP workloads more efficiently, which is quite useful but takes specific optimisation to make use of.
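A quick NumPy illustration of the packing idea (this only shows the storage/bandwidth side; the doubled maths rate is a hardware property NumPy can't demonstrate):

```python
import numpy as np

a32 = np.arange(8, dtype=np.float32)
a16 = a32.astype(np.float16)
print(a32.nbytes, a16.nbytes)   # 32 vs 16 bytes: half the traffic per value
packed = a16.view(np.uint32)    # pairs of FP16 sharing one 32-bit word
print(packed.shape)             # (4,): two halves per 32-bit lane
```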
 

I understand, and it definitely is a GCN evolution, though probably a much bigger evolution than has happened before. It's an interesting one, and with two computers being built in the next couple of months, I am waiting to see how Vega performs before I decide to go red or green.
 
I'm running an RX 480 4GB with a 1440p 144 Hz FreeSync monitor. I'm gagging for an upgrade. Even if Vega only matches the GTX 1080, or comes in slightly below, I'll be happy.
 
I've recently upgraded to a 21:9 3440 x 1440 LG monitor with FreeSync, so all I'm hoping for is that Vega will be powerful enough to run games at native res at decent settings within the FreeSync range.

I refused to pay the extra for a G-Sync monitor, as the prices are extortionate, and the 1080 Ti is too expensive as well.

Currently running it off my GTX 970, which certainly isn't cutting it and obviously isn't FreeSync compatible.
 
If top Vega edges the 1080 in DX11 games and gets pretty close to the Ti in DX12/Vulkan, then I'd call that a success. Better to look to the future than live in the past :)

(and my fave game 'The Division' has a pretty good DX12 implementation :D)
 