AMD VEGA confirmed for 2017 H1

What are the chances of RX Vega outperforming (not matching but exceeding) a 1080 Ti in:
- DX11
- DX12
- Proper/optimized DX12
- Vulkan

I know it's impossible to say with certainty, but a gut feel...
 
What are the chances of RX Vega outperforming (not matching but exceeding) a 1080 Ti in:
- DX11
- DX12
- Proper/optimized DX12
- Vulkan

I know it's impossible to say with certainty, but a gut feel...
It is possible, I think. But no one knows. People who say it is not possible are talking out of their backsides IMO. It might be okay to say possible but improbable, however :p

Price for performance is what is important to me. I hope that is what they get right most of all.
 
Well, nVidia released a Ti with more CUDA cores than expected (3384 I think was rumoured for a long while), so I'm taking that as an indication that they know Vega will compete :)
 
What are the chances of RX Vega outperforming (not matching but exceeding) a 1080 Ti
Why is it silly to expect more? I must be silly then, as I expect Vega to compete in some way or fashion with the 1080 Ti, and I suspect Nvidia do as well, seeing the price looks to be better than most thought. Although I said it would be $700-800 and not the $1000 some were predicting.
Just recalling the Fury and 980 Ti situation. On paper an equal product; in reality it took two years to reach the same performance.
And Vega is in a worse position, because Pascal simply clocks better.
 
Just recalling the Fury and 980 Ti situation. On paper an equal product; in reality it took two years to reach the same performance.
And Vega is in a worse position, because Pascal simply clocks better.
We don't know if Vega is in a worse position; it is a new architecture. As for Pascal clocking better, it is not the same clock for clock as Maxwell from what I recall.
 
Just recalling the Fury and 980 Ti situation. On paper an equal product; in reality it took two years to reach the same performance.
And Vega is in a worse position, because Pascal simply clocks better.

So you got your hands on a Vega sample then? It is funny, as there is a thread saying they are getting DX12 drivers with up to a 16% increase with Pascal, so they are doing the same as AMD.
 
Just recalling the Fury and 980 Ti situation. On paper an equal product; in reality it took two years to reach the same performance.
And Vega is in a worse position, because Pascal simply clocks better.

Pascal is an average overclocker in comparison to Maxwell and past cards. If your card comes out of the box boosting at 1800 MHz and achieves a stable boost of around 2100 MHz, you are only looking at around a 17% OC.

A GTX 980 Ti boosting at 1200 MHz and achieving 1500 MHz is 25%.

Remember there were cards like the 7950 that came with an 800 MHz core clock and could do 1200 MHz, which is a 50% overclock.

So Vega could easily be a better clocker, as Pascal is already pushed harder out of the box than Maxwell was, hence the lower overclocking headroom. Who knows, as Vega is a mystery at this moment.
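For anyone who wants to double-check those headroom percentages, here's a trivial C++ snippet that just reproduces the arithmetic from the clock figures quoted above (no other assumptions):

```cpp
// Overclocking headroom = (overclocked clock / stock boost clock - 1) * 100.
// The clock figures below are the ones quoted in the post above.
#include <cstdio>

static double headroomPct(double stockMHz, double ocMHz) {
    return (ocMHz / stockMHz - 1.0) * 100.0;
}

int main() {
    std::printf("Pascal   1800 -> 2100 MHz: %.1f%% OC\n", headroomPct(1800, 2100)); // ~16.7%
    std::printf("980 Ti   1200 -> 1500 MHz: %.1f%% OC\n", headroomPct(1200, 1500)); // 25.0%
    std::printf("HD 7950   800 -> 1200 MHz: %.1f%% OC\n", headroomPct(800, 1200));  // 50.0%
    return 0;
}
```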
 
Just recalling the Fury and 980 Ti situation. On paper an equal product; in reality it took two years to reach the same performance.
And Vega is in a worse position, because Pascal simply clocks better.

Pascal is a terrible overclocker, which is further castrated by the 1.093 V voltage cap and throttling from 22°C.
 
Pascal has very high boost clocks out of the box, so overclocking will not give you a large percentage gain over the boost clocks.
Which was a smart move by Nvidia, as reviews showed good performance.
 
What are the chances of RX Vega outperforming (not matching but exceeding) a 1080 Ti in:
- DX11
- DX12
- Proper/optimized DX12
- Vulkan

I know it's impossible to say with certainty, but a gut feel...

I doubt they will do so well with DX11. DX12 / Vulkan, however, have had AMD at their heart since conception, and as such Vega should put up a good fight against Pascal, which to my mind really is little more than a die-shrunk Maxwell. Vega has been designed to run DX12 / Vulkan.
 
I doubt they will do so well with DX11. DX12 / Vulkan, however, have had AMD at their heart since conception, and as such Vega should put up a good fight against Pascal, which to my mind really is little more than a die-shrunk Maxwell. Vega has been designed to run DX12 / Vulkan.

There is a decent chance in all of them, but it gets more likely as you go from top to bottom. Vega looks like a true DX12 next-gen architecture, while Pascal is still, for me, a DX11 architecture with some DX12. I expect Vega to be fast, but this is the world of PC hardware, where disappointments happen a lot.
 
We don't know if Vega is in a worse position; it is a new architecture. As for Pascal clocking better, it is not the same clock for clock as Maxwell from what I recall.
And adding to this, AMD have taken a similar approach with Vega as they did with Zen and are implementing features from their biggest competitor (https://www.techpowerup.com/231129/on-nvidias-tile-based-rendering), and this could very well end up giving AMD an edge.
 
Well, the large Vega is meant to be 12 TFLOPS; that tells you something if it is true.

It tells you the card has legs, but TFLOPS have not been AMD's problem for the last 4 years. Their CPU overhead has crippled every GCN card since the 7970, so unless they have some way of resolving this they will be forever behind until DX12/Vulkan becomes the norm.
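As a rough sanity check on where a 12 TFLOPS figure could come from: peak FP32 throughput is usually quoted as 2 FLOPs (one FMA) per shader per clock. The shader count and clock in the snippet below are placeholder assumptions to illustrate the maths, not confirmed Vega specs:

```cpp
// Peak FP32 TFLOPS = 2 (FMA) * shader count * clock (GHz) / 1000.
// Shader count and clock here are illustrative assumptions only,
// not confirmed Vega specifications.
#include <cstdio>

int main() {
    const double shaders  = 4096.0; // assumed shader count
    const double clockGHz = 1.5;    // assumed boost clock
    const double tflops   = 2.0 * shaders * clockGHz / 1000.0;
    std::printf("Peak FP32: %.1f TFLOPS\n", tflops); // ~12.3 TFLOPS
    return 0;
}
```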
 
It tells you the card has legs, but TFLOPS have not been AMD's problem for the last 4 years. Their CPU overhead has crippled every GCN card since the 7970, so unless they have some way of resolving this they will be forever behind until DX12/Vulkan becomes the norm.

Utilization in the real world is something AMD claim to have improved hugely.
 
It tells you the card has legs, but TFLOPS have not been AMD's problem for the last 4 years. Their CPU overhead has crippled every GCN card since the 7970, so unless they have some way of resolving this they will be forever behind until DX12/Vulkan becomes the norm.

So since when is Vega the same as Fury/390? It is like saying Kepler is the same as Maxwell.
 
From what we know AMD appear to have tackled many of the bottlenecks that have held previous GCN cards back, particularly in DX11, which will continue to be important. My only concern is that Vega appears to be a hybrid gaming/compute card, but I don't have any idea how much that compromises the gaming side.

Given the complete architectural rework (and AMD claim an emphasis on increasing both shader core IPC and clock speed), I don't think it's impossible for them to release a card that is roughly equivalent in size to the Ti but faster. And then there is HBM2, which is still a big unknown, since (IMO anyway) comparisons to Fiji aren't really representative.

So I'm optimistic, basically. It really depends on how good their execution is, and while they have disappointed before, at least Zen shows that there is still some hope.
 
Utilization in the real world is something AMD claim to have improved hugely.

Don't get me wrong, there are games like Doom, Hitman and AOTS that show what GCN is capable of, and AMD have shown they're able to pull some more performance out of it through driver improvements, but driver improvements only go so far.

So since when is Vega the same as Fury/390? It is like saying Kepler is the same as Maxwell.

It remains to be seen, but the way GCN is designed, it's built around the newest APIs. When it comes to DX11 (which is old hat now, so don't expect massive gains from AMD on this front), for AMD to match Nvidia they would need to redesign their hardware to support driver command lists, which means game submission can be multi-threaded through the driver, and that massively improves performance. That's not to say DX11 games can't be multi-threaded: both companies support deferred command lists, which are a standard part of DX11, but for those to work the threading needs to be done in the game code (please note I may have mixed up the use of deferred commands and driver commands).

I think you can guess which option most game companies will take, which puts a burden onto the AMD and Nvidia driver teams. Nvidia has the resources and the sales to support effectively re-writing game code to make it more efficient, as they built deferred command support into their hardware's front end; AMD on the other hand doesn't support that, even if they had the driver to do what Nvidia does, and as a result when a GCN card makes a draw call from the CPU it all gets loaded onto one CPU core and has to wait until that command is finished before making another.

Put it another way: last year the GeForce 970 went head to head with the Radeon 390. Both were priced at similar levels, and when there was a new game on the market they were often pitched against each other. On paper the 390 should have flattened the 970; bigger die, more cores and a much higher power draw. Instead, whilst the 390 was a bit faster, especially once drivers were updated, they were largely neck and neck most of the time. The same is true with the 1060 vs the RX 480: sure, they match, but the 1060 doesn't need the raw horsepower of the RX 480 to keep up, as it's efficient with the way it handles the instructions the CPU sends it.

I can't really see this changing until the new APIs become the norm and DX12 and Vulkan are available from release on new titles rather than patched in later down the road.
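For anyone unfamiliar with the DX11 terms being thrown around here, this is a rough sketch (Windows/D3D11, error handling omitted, link against d3d11.lib) of how a game records work on a deferred context and how it can ask the driver whether it supports command lists natively. It only illustrates the mechanism discussed above, not how any particular vendor's driver behaves:

```cpp
// Sketch of the DX11 multithreading features discussed above (Windows only,
// error handling omitted). A game records draw work on deferred contexts
// from worker threads and replays it on the immediate context; whether the
// *driver* builds command lists natively is reported by D3D11_FEATURE_THREADING.
// If DriverCommandLists is FALSE the runtime emulates them, and submission
// effectively funnels through one thread.
#include <cstdio>
#include <d3d11.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> immediate;
    D3D_FEATURE_LEVEL level = {};
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION,
                      &device, &level, &immediate);

    // Ask the driver whether it supports multithreaded command lists natively.
    D3D11_FEATURE_DATA_THREADING threading = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                &threading, sizeof(threading));
    std::printf("Driver command lists: %s\n",
                threading.DriverCommandLists ? "native" : "runtime-emulated");

    // A worker thread records commands on a deferred context...
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);
    // ... state setup and draw calls would be recorded here ...

    ComPtr<ID3D11CommandList> commands;
    deferred->FinishCommandList(FALSE, &commands);

    // ... and the render thread replays the recorded work in one go.
    immediate->ExecuteCommandList(commands.Get(), TRUE);
    return 0;
}
```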
 
Nvidia has the resources and the sales to support effectively re-writing game code to make it more efficient, as they built deferred command support into their hardware's front end; AMD on the other hand doesn't support that, even if they had the driver to do what Nvidia does, and as a result when a GCN card makes a draw call from the CPU it all gets loaded onto one CPU core and has to wait until that command is finished before making another.

Goes beyond that - nVidia are hooking some DX11 functions when called and replacing them at runtime so that they interface more optimally with their drivers/architecture, which also lets them implement threaded calls to the driver within the function itself instead of trying to pick up the pieces within the driver.
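To make that a bit more concrete, here's a toy illustration (entirely made-up names, nothing to do with any real driver) of the general idea of replacing the function behind an API call at runtime, so the same call from the game lands in an optimised path:

```cpp
// Toy illustration of runtime function replacement ("hooking"). Nothing here
// reflects a real driver; real D3D11 hooking patches COM vtables or uses a
// detour library. The point is only that the call site in the "game" does not
// change while the implementation behind it does.
#include <cstdio>

using DrawFn = void (*)(int indexCount);

// Plain implementation the API would normally dispatch to.
static void genericDraw(int indexCount) {
    std::printf("generic path: %d indices\n", indexCount);
}

// Hypothetical optimised replacement a driver might install, e.g. one that
// hands submission off to its own worker threads.
static void optimisedDraw(int indexCount) {
    std::printf("hooked path:  %d indices via threaded submission\n", indexCount);
}

// The API dispatches through a mutable pointer, which is what makes
// replacing the implementation at runtime possible.
static DrawFn g_drawImpl = genericDraw;

void DrawIndexed(int indexCount) { g_drawImpl(indexCount); }  // the "API" call

int main() {
    DrawIndexed(36);              // goes through the generic path
    g_drawImpl = optimisedDraw;   // the "driver" installs its hook
    DrawIndexed(36);              // same call, optimised implementation
    return 0;
}
```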
 