
Vega refresh in 2018, current Vega is broken!

They can't even meet Vega demand, never mind a refresh already.

That might be because GF's 14nm really isn't that great (there's a reason you can undervolt the average card quite a bit). A refresh on a new node might actually improve things, including yields/supply.
Recent comments from an XFX rep suggest it's HBM availability again.
 
Prim shaders are active, as the way Vega now works in hardware requires them. But the thing that is not active is the NGG fast path. This is still being worked on as the driver is rebuilt from the ground up; part of that work is shown by the hiring of a new lead shader compiler programmer.

Once the above is done, the DSBR will also be able to operate with the new work distributor, allowing the ROPs to remain constantly filled with tiles from any screen-space coordinates instead of only the native quadrants they cover.

Both should provide a reasonable, if not considerable, performance bump when fully active. At the moment the DSBR and prim shaders are only working in native mode, which is why Vega is no better than Fiji with higher clocks. The DSBR, when active, reduces required bandwidth and power, but works sub-optimally because it cannot make use of the new work distribution engine without NGG also being active.
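To make the work-distributor point concrete, here is a toy Python sketch (nothing here reflects AMD's actual hardware; tile sizes, counts, and the "ROP queue" abstraction are purely illustrative). It contrasts a fixed mapping of screen-space tiles to ROP quadrants with a global distributor that hands any tile to any ROP:

```python
# Toy sketch (not AMD's implementation): contrast a fixed mapping of
# screen-space tiles to ROP quadrants with a global work distributor
# that can hand any tile to any free ROP.

def quadrant_assignment(tiles, num_rops=4, screen_w=8, screen_h=8):
    """Each ROP only ever sees tiles from its own fixed screen quadrant."""
    queues = [[] for _ in range(num_rops)]
    for (x, y) in tiles:
        quad = (1 if x >= screen_w // 2 else 0) + (2 if y >= screen_h // 2 else 0)
        queues[quad].append((x, y))
    return queues

def global_distribution(tiles, num_rops=4):
    """A work distributor keeps every ROP fed regardless of coordinates."""
    queues = [[] for _ in range(num_rops)]
    for i, tile in enumerate(tiles):
        queues[i % num_rops].append(tile)  # round-robin: always balanced
    return queues

# All geometry lands in one quadrant: the fixed mapping idles 3 of 4 ROPs.
tiles = [(x, y) for x in range(4) for y in range(4)]  # top-left only
print([len(q) for q in quadrant_assignment(tiles)])   # [16, 0, 0, 0]
print([len(q) for q in global_distribution(tiles)])   # [4, 4, 4, 4]
```

When geometry clusters in one part of the screen (common in real frames), the fixed-quadrant scheme leaves most ROPs starved, which is the imbalance the new distributor is meant to remove.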

There is no problem with the hardware itself, the driver side of things is just taking considerably more work.

Not very good

AMD knew what the hardware was going to be years ago, so if there is a software problem now, what more can I say but laugh.

Having said that, I don't think AMD are that stupid, but they have been let down by HBM.

Interesting if true, but it's much of a muchness if there is almost no availability. Sure, existing owners will see benefits, but improved performance is not going to make more sales when AMD can't supply, and what little there is is so constrained that price gouging is pushing prices up into silly territory.

You can only do so much with simulations; you need hardware to get things working properly. Nvidia had a whole year or more to get driver-side scheduling working properly for Maxwell 2 before it launched, since they tested it all with Maxwell 1 in the form of the 750 Ti. AMD have not had that luxury, for financial reasons.

The sheer scale of the changes they are making requires a massive rewrite and restructuring of their GPU drivers overall. This work coincides with the restructuring of their drivers to make a large portion of them platform agnostic; the first parts of this work can be seen in the AMDVLK driver for Linux.

There is nothing wrong performance-wise with HBM, at any resolution. As I said before, it was already shown to you multiple times that the issue was not related to the memory, apart from RAM limits with Fiji. With Vega, performance scales as it should with resolution, as the memory limit is no longer there.
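A rough back-of-envelope calculation supports the bandwidth side of this claim (the overdraw and pixel-format figures below are illustrative assumptions, not measurements; the 484 GB/s figure is Vega 64's quoted HBM2 peak). Per-frame pixel traffic scales linearly with resolution and sits far below the headline bandwidth even at 4K, so the practical limit on Fiji was its 4 GB capacity, not HBM throughput:

```python
# Back-of-envelope sketch: per-frame colour/depth traffic scales
# linearly with resolution. Overdraw and bytes-per-pixel here are
# illustrative assumptions, not profiled numbers.

HBM_GBPS = 484  # Vega 64's quoted peak bandwidth, GB/s (HBM2)

def frame_traffic_gb(width, height, bytes_per_pixel=8, overdraw=4):
    """Approximate read+write framebuffer traffic for one frame, in GB."""
    return width * height * bytes_per_pixel * overdraw / 1e9

for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    per_frame = frame_traffic_gb(w, h)
    print(f"{name}: {per_frame:.3f} GB/frame, "
          f"~{HBM_GBPS / per_frame:.0f} fps at peak bandwidth")
```

Going from 1080p to 4K exactly quadruples the traffic, which is the linear scaling with pixel count the post describes.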


Much of the work they are doing now is not just for Vega; it will be for Navi and onwards. With the NGG fast path and primitive shaders they are essentially removing the limitations of some of the last bits of the fixed-function pipeline and replacing them with a highly programmable pipeline that combines many of the fixed-function stages.
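The core idea behind combining stages can be sketched in a few lines of toy Python (this illustrates the concept only; it bears no relation to AMD's actual primitive-shader ISA or driver code). By folding vertex transformation and triangle culling into one programmable stage, rejected triangles never reach the downstream fixed-function rasteriser at all:

```python
# Toy illustration of the primitive-shader idea: fold vertex transform
# and triangle culling into one programmable pass, so culled triangles
# are discarded before any fixed-function rasterisation work.

def signed_area(a, b, c):
    """2D cross product; <= 0 means back-facing or degenerate here."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def primitive_stage(triangles, transform):
    """Transform vertices AND cull in one combined programmable stage."""
    survivors = []
    for tri in triangles:
        t = [transform(v) for v in tri]
        if signed_area(*t) > 0:      # keep only front-facing triangles
            survivors.append(t)
    return survivors

flip_x = lambda v: (-v[0], v[1])     # a trivial stand-in 'vertex shader'
tris = [
    [(0, 0), (1, 0), (0, 1)],        # winding flips after transform: culled
    [(0, 0), (0, 1), (1, 0)],        # becomes front-facing: survives
]
print(len(primitive_stage(tris, flip_x)))  # 1 triangle reaches the rasteriser
```

The win is that culling happens with full knowledge of the transformed geometry, inside a programmable stage, rather than after fixed-function setup has already spent work on doomed primitives.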

Availability also depends on fab constraints and other parts of the supply chain. From recent tweets it seems like there is more than a single chokepoint at the moment: one coming from HBM demand and the other from overall demand for Vega parts.

Nvidia has a 1:1 simulation of the hardware (not sure what it's up to now, but it was almost two large rooms of servers). Obviously you can't account for things like semiconductor field effects, changes due to thermals, etc., but it allows pretty much everything else to run as if on the actual hardware. Not sure if AMD has anything like that. So that was probably used to get stuff working for Maxwell 2 before it existed.
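To give a flavour of why pre-silicon simulation is so valuable, here is a minimal sketch (purely illustrative; it models nothing about Nvidia's actual emulation farm): a cycle-level model of a tiny 3-stage in-order pipeline lets you measure how a design change, here a reduced memory-stall penalty, affects throughput before any silicon exists:

```python
# Minimal cycle-level model of a toy 3-stage in-order pipeline.
# Purely illustrative: real pre-silicon simulators model the full
# microarchitecture, but the workflow idea is the same.

def simulate(instructions, mem_latency):
    """Count total cycles, charging loads a memory-stall penalty."""
    cycles = 2  # fill fetch/decode before the first instruction retires
    for op in instructions:
        cycles += mem_latency if op == "load" else 1
    return cycles

program = ["add", "load", "add", "load", "mul"]
print(simulate(program, mem_latency=4))   # baseline design: 13 cycles
print(simulate(program, mem_latency=2))   # proposed change:  9 cycles
```

Scale this idea up to a full GPU model and you can also bring up drivers against it, which is how software can be made ready before the first chips come back from the fab.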

There was a video about that from one of the tech sites; it was very clever what they could do to accurately simulate new cards before they got the actual silicon in house.
No idea if AMD use something similar.
 

There was a comment in the Phoronix forums about this; I'll try and dig it up if I can. IIRC they didn't use to, but there have been recent developments in this area. I get the impression that even if they have caught up on the simulator side, they've got some way to go to truly integrate it into their processes and fully benefit from it.

I'd expect such capability to be very useful not only in driver development, but also in modelling future architectures and assessing technical strategy. Long gone are the days when we could expect new entrants into the consumer graphics space, such is the level of technical complexity.

Yes, the simulator is used not just for pre-silicon driver development but also for detailed performance analysis and testing, especially of future architectures. This is one reason why Nvidia GPUs have been very well balanced recently, without too much bottlenecking.