• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed

    Votes: 207 39.2%
  • (on) Overcrowding, standing room only

    Votes: 100 18.9%
  • (never ever got on) Chinese escalator

    Votes: 221 41.9%

  • Total voters
    528
It doesn't really matter to AMD if big Vega makes sales in gaming. It is baffling how people don't understand that. What would be a disaster for AMD would be if it weren't competitive in compute and didn't sell there. Vega FE already competes with the P6000 in compute, and that is where the money is. HBM is a great technology, whatever so many here seem to think: it doesn't bring that much on the gaming side, but it brings a lot on the compute side, and once again that is where the money is. Vega not being competitive in gaming isn't a disaster for AMD; it's a disaster for consumers. People really seem to think AMD is begging on its knees to make a little money per GPU on the gaming side when they make thousands per GPU selling that same card into compute. It seems they made Vega the same way Zen was made: for the compute/pro market, selling it on the consumer side hoping it would be competitive.


Vega FE lacks serious FP64 support, so for most HPC applications it isn't even a consideration against the Pascal GP100.

The double-rate FP16 support should do well for deep-learning applications, but AMD has a massive issue with software compatibility and support here. The ML world, similar to HPC, is heavily focused on CUDA support. AMD have talked about adding support to some popular software like TensorFlow IIRC, but it is hard to imagine Vega will be really successful here in the short term.
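To make the FP16 trade-off concrete, here is a minimal NumPy sketch (the sizes are made up and it runs on the CPU; nothing here is Vega-specific). Half precision halves the storage and bandwidth per value, which is exactly the headroom that double-rate FP16 hardware turns into extra throughput, at the cost of precision:

```python
import numpy as np

# Hypothetical activation matrix: FP16 halves memory and bandwidth vs FP32,
# which is the headroom double-rate FP16 hardware converts into throughput.
a32 = np.random.rand(1024, 1024).astype(np.float32)
a16 = a32.astype(np.float16)
print(a32.nbytes // 2**20, "MiB in FP32")  # 4 MiB
print(a16.nbytes // 2**20, "MiB in FP16")  # 2 MiB

# The trade-off is precision: roughly 3 decimal digits in FP16 vs 7 in FP32.
w = np.random.rand(1024, 1024).astype(np.float32)
ref = a32 @ w
approx = a16.astype(np.float32) @ w  # same maths, inputs rounded through FP16
print("max relative error from FP16 rounding:",
      float(np.max(np.abs(ref - approx) / np.abs(ref))))
```

For a lot of training and inference that error is tolerable, which is why the ML crowd cares about FP16 rate in the first place; the software plumbing to exploit it is the hard part, as above.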

The other major issue for HPC and deep learning is power efficiency. Consumers may or may not care, but the server-farm market absolutely does, because increased power means a lot of additional running cost: not just the electricity to power the cards, but the cooling too. Google went so far as to develop their own dedicated hardware for TensorFlow, not because of the performance of Nvidia GPUs, but because they could achieve better performance per watt. Vega FE is well behind Nvidia right now, and Volta's GV100 dedicated tensor cores absolutely destroy Vega for deep-learning tasks while additionally offering 1:2 FP64 support.
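To put rough numbers on that, here is a back-of-the-envelope sketch of why perf-per-watt dominates buying decisions at server-farm scale; the throughput and wattage figures for card_a and card_b are invented placeholders, not measured Vega or Volta numbers:

```python
# Back-of-the-envelope perf-per-watt cost model; all figures are hypothetical.
KWH_PRICE = 0.10           # $ per kWh of electricity
PUE = 1.5                  # datacentre overhead factor (cooling etc.)
HOURS_PER_YEAR = 24 * 365

def annual_power_cost(watts: float) -> float:
    """Yearly electricity-plus-cooling cost of one card running flat out."""
    return watts * PUE * HOURS_PER_YEAR / 1000 * KWH_PRICE

cards = {
    # name: (relative training throughput, board power in watts), both invented
    "card_a": (1.0, 300),
    "card_b": (1.2, 250),
}
for name, (perf, watts) in cards.items():
    cost = annual_power_cost(watts)
    print(f"{name}: {perf / watts * 1000:.2f} perf/kW, "
          f"${cost:.0f}/year, ${cost / perf:.0f} per unit of throughput")
```

Scale that per-card gap across a few thousand GPUs and the efficiency difference quickly dominates the purchase price.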


If Vega was so heavily focused on compute it would be a very different chip: it would actually have compute-focused features like 1:2 FP64 support. It wouldn't have dedicated silicon for tiled rendering or new geometry processors.
 
Atm I'm thinking I'll just keep my 290..?
It is a shock... :( ... When's Navi out?
2019, and Volta AIB/custom is probably next summer. It's a retake of the H1 thing; people want to say December for Volta, but I don't think so, and they don't have any need to rush like that. The smart buy to me here is some kind of 2.0 for Vega. Sure, Navi will be greater, but also your gfx will be ancient. By 2019 you'll be like me on low settings, muttering to yourself that 24fps is all the human eye can see.

If there is a little Vega I'll go for that; it's easy for me as I don't need/want anything beyond 1080p.


http://gizmodo.com/why-frame-rate-matters-1675153198
 
Hey Gregster, sure, the initial 'results' for big desktops are not great, but as I already pointed out earlier in the thread, the main thing AMD needs to fix is competition in the laptop space. With a 1080 + Intel quad + G-Sync Alienware laptop costing the thick end of £3k inc. (the Razer equivalent is £4k), I really am desperate to see AMD launch anything in the same ballpark as the 1080. They have the CPU, the display tech and everything they need to get a competitive mobile product out there for 2018.

Vega never did and does not need to be 1080 Ti or Volta speed, because mobile requirements are different *unless* you really need the best possible mobile VR experience, and I can totally understand that as an edge case.

The only disappointment I have is that the Vega thermals appear so high that if they stick a full-fat gaming Vega into a laptop it will throttle pretty hard, and also, in terms of timing, they might be up against Volta once the AMD mobile solution launches.

I really don't want to be paying so much money for my laptops, and this is down to Nvidia and Intel having no competition. I can't wait for Vega to launch, and I would have been aghast if it had been cancelled.
I see your point, but the heat that will be generated by Vega is kind of worrying. I hope they run lower voltages at least, as has been done on the FE I see, and this keeps temps down, but for a laptop I would be a bit worried.
 
Vega FE lacks serious FP64 support, so for most HPC applications it isn't even a consideration against the Pascal GP100.

The double-rate FP16 support should do well for deep-learning applications, but AMD has a massive issue with software compatibility and support here. The ML world, similar to HPC, is heavily focused on CUDA support. AMD have talked about adding support to some popular software like TensorFlow IIRC, but it is hard to imagine Vega will be really successful here in the short term.

The other major issue for HPC and deep learning is power efficiency. Consumers may or may not care, but the server-farm market absolutely does, because increased power means a lot of additional running cost: not just the electricity to power the cards, but the cooling too. Google went so far as to develop their own dedicated hardware for TensorFlow, not because of the performance of Nvidia GPUs, but because they could achieve better performance per watt. Vega FE is well behind Nvidia right now, and Volta's GV100 dedicated tensor cores absolutely destroy Vega for deep-learning tasks while additionally offering 1:2 FP64 support.


If Vega was so heavily focused on compute it would be a very different chip: it would actually have compute-focused features like 1:2 FP64 support. It wouldn't have dedicated silicon for tiled rendering or new geometry processors.

True. I think they had to go with what they had. Navi is probably going to be the real compute card with FP64. But the point here is that it costs the same to make an RX Vega and an Instinct MI25. One is 400-500 and the other is 3,000(?); it doesn't take a genius to work out which they would rather sell. Even after certification and other costs on the compute side, they would have to sell a lot of RXs to break even with one MI25 sold. It doesn't really compete with Nvidia's compute cards, but it's still going to sell some. And I bet they are going to make piles more money out of them than out of RX Vegas.
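Rough arithmetic behind that point (the prices are the ballpark figures above; the shared build cost is an invented placeholder):

```python
# How many consumer cards does it take to equal one pro card in gross margin?
# BOM_COST is a made-up shared silicon/board cost; prices are the ballpark
# figures above.
BOM_COST = 350
rx_price, mi25_price = 500, 3000

rx_margin = rx_price - BOM_COST       # 150 per RX Vega
mi25_margin = mi25_price - BOM_COST   # 2650 per Instinct MI25

print(f"RX Vegas needed to match one MI25: {mi25_margin / rx_margin:.1f}")
# -> ~17.7, before certification and support costs widen the gap further
```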
 
Of course it's true; they know they've got Buckley's of selling many to gamers, as we're past Vega now. It's old hat: we've had much better performing cards that use way less power than it, for over a year now.

It's not though, as it has the most comprehensive gaming feature set. It's a gaming card, as a lot of other compute-based architectures have been. It might not be as fast as NV's gaming-only cards, but it will still game well.
 
Yup, I said it pages back: they don't care how many they sell to gamers, as they ain't for gamers now. They are for people who mainly do VR, pro, compute etc. stuff and like to do a bit of gaming too; people who won't care about running their games fully maxed out, ultra settings, and at umpteen fps.

If all you do is game, and nothing else, then AMD isn't for you; Nvidia are the ones for you, as they are the only ones who do cards for just gaming, and that's their GeForce range.
internet sarcasm :D



Anyone got any idea on RX Vega supplies? Someone working for a distributor/big-time datacenter/AMD who would like to share the load? :D

Btw, the event looked sleazy... and those emcees were condescending.

You are on the money. But the same GPU is used on all sides: Instinct, FE and RX. That is exactly why AMD has had to put everything into compute; I really don't think there have been that many software guys working on RX drivers, because all of them have been building up the compute side of things, trying to close that gap on CUDA.

I am just being practical here... even in deep learning, a gimped FP64 is not going to work.
If a computer is unable to invert sparse matrices, it can't do most of the critical computational tasks associated with a large panel of variables (a quick sketch of why is below).

Maybe in the future people will try to optimize their IT spend by specializing tasks between Nvidia and AMD units:
the Nvidia GPU will be used for all transformations involving matrices,
the AMD GPU will run Monte Carlo simulations,
and everyone will be happy with their capex control :D
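To make the FP64 point concrete, here is a small NumPy sketch; it uses a dense ill-conditioned Hilbert matrix as a stand-in for the hard sparse systems you get with a large panel of variables, since the precision effect is the same:

```python
import numpy as np

def hilbert(n: int) -> np.ndarray:
    """Classic ill-conditioned test matrix, standing in for the tough
    (sparse) systems that appear with large panels of variables."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

n = 10
x_true = np.ones(n)
for dtype in (np.float32, np.float64):
    A = hilbert(n).astype(dtype)
    b = A @ x_true.astype(dtype)
    x = np.linalg.solve(A, b)
    print(dtype.__name__, "max error:", float(np.max(np.abs(x - x_true))))
# FP32 typically loses essentially all accuracy here while FP64 stays usable;
# hardware with gimped FP64 leaves you stuck near the float32 line.
```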
 
I remember there being several events as part of this road tour. Budapest is done; what's next? Is it going to be more cloak and dagger?
 
I just love how they keep derailing this hype train of Poor Volta... **** even Pascal is not poor :D
 
So it was mentioned numerous times that people felt the left system performed better in the blind test. Later on, due to a little bit of missed redaction, the left system was discovered to be using an ASUS Designo monitor, which is a FreeSync one.
[Image: ASUS-Designo-Curve-MX34VQ; note the text on the side and shape of the back.]

[Image: the left system at the event; note the text on the side and shape of the back.]
 
Wouldn't be a problem if the miners hadn't bought up all the high-wattage PSUs as well.
At least the power draw of Vega will hopefully keep it out of the hands of miners though...
 
I hate it when I forget to log in to read this thread. It means my ignore filter isn't applied and, regrettably, Loadsamoney's trolling posts that add nothing are visible.
 
Wouldn't be a problem if the miners hadn't bought up all the high-wattage PSUs as well.
At least the power draw of Vega will hopefully keep it out of the hands of miners though...
Why do I keep seeing this? Miners will undervolt them. It's not a matter of pure clock speed, and it seems Vega FE undervolts quite a bit; it's only when overclocking Vega that the power goes crazy. It's the same with my 1080 Ti: it sits nicely at stock being a good old 250 W TDP card, but add a 150 MHz overclock and the thing wants to suck an extra 50 watts of power. So for 150 MHz it wants around a fifth more power. Power efficiency goes out the window, it seems, with higher frequencies atm.
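The reason undervolting works: dynamic power scales roughly as frequency times voltage squared, and the last few hundred MHz need a disproportionate voltage bump. A quick sketch in the spirit of the 1080 Ti example (the clocks and voltages are illustrative guesses, not measured values):

```python
# Simple dynamic power model: P ~ constant * V^2 * f.
# Baseline figures are illustrative, loosely mirroring the 1080 Ti example.
base_clock, base_volt, base_power = 1900, 1.00, 250  # MHz, V, W

def power(clock_mhz: float, volts: float) -> float:
    """Scale baseline power by the V^2 * f rule."""
    return base_power * (clock_mhz / base_clock) * (volts / base_volt) ** 2

# Overclocking needs extra voltage; undervolting does the reverse.
print(f"OC  {base_clock + 150} MHz @ 1.07 V: {power(base_clock + 150, 1.07):.0f} W")
print(f"UV  {base_clock - 100} MHz @ 0.90 V: {power(base_clock - 100, 0.90):.0f} W")
# The OC lands near +60 W for +150 MHz, much like the figures above, while
# the undervolt cuts roughly 60 W for only a small clock loss.
```

Which is exactly why a high stock TDP doesn't put miners off: they trade a little clock for a big cut in the voltage-squared term.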
 