• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Poll: The Vega Review Thread.

What do we think about Vega?

  • What has AMD been doing for the past 1-2 years?

  • It consumes how many watts and is how loud!!!

  • It is not that bad.

  • Want to buy but put off by pricing and warranty.

  • I will be buying one for sure (I own a Freesync monitor so have little choice).

  • Better red than dead.


Results are only viewable after voting.
AMD have publicly announced that these features are working in the RX Vega drivers.


The problem is certain AMD fans are just in disbelief at the current situation and are trying desperately to find excuses or keep their hype train rolling. If important features aren't enabled now then they never will be, so it is also an irrelevant question from a performance perspective. Some people mistakenly think that these features will add 30+% performance when the reality is that, individually, they give a few percent, and even that will be very dependent on the current scene.

The unfortunate aspect of computing is that you are always limited by the slowest tasks, and even when you speed up a task significantly, e.g. by 50%, the overall speed increase is small because other factors become more limiting.
Rendering a scene might involve, say, 10,000 computational tasks (the exact number is irrelevant, but there are lots). Most tasks contribute a tiny fraction of a percent to the rendering time. The worst offending tasks, the slowest ones, might take 4% of the rendering time. You do some sophisticated optimisation in hardware or software and make them twice as fast; now your frame rate has improved by a whole 2%.
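
To put rough numbers on that (a hypothetical back-of-the-envelope sketch in Python; the task counts and timings are invented, not measured from any real renderer):

```python
# Hypothetical illustration of why speeding up one task barely moves frame time.
# All numbers are invented for the example, not measured from any GPU or game.

def frame_time(task_times):
    """Total frame time is the sum of all the tasks that make up the frame."""
    return sum(task_times)

# Pretend a frame is built from thousands of tiny tasks plus one "worst offender"
# that accounts for ~4% of the total frame time.
small_tasks = [0.0016] * 9999                      # tiny contributions, in ms
worst_task = 0.04 * (sum(small_tasks) / 0.96)      # scaled to ~4% of the total

baseline  = frame_time(small_tasks + [worst_task])
optimised = frame_time(small_tasks + [worst_task / 2])  # make that one task 2x faster

print(f"baseline frame time : {baseline:.3f} ms")
print(f"optimised frame time: {optimised:.3f} ms")
print(f"overall speedup     : {baseline / optimised:.3f}x")  # ~1.02x, i.e. ~2%
```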


If they have publicly done so, feel free to link it.
 
I find that an odd statement when the main point of these features is more performance for the same workload. A prime example is when Bethesda broke GameWorks and suddenly Nvidia slid down in performance; if these features were simply about power saving, then performance would not have taken the hit it did.

If you are handing one part of the GPU a workload that makes it run at 100% to produce an output, but the same output can be produced with some simple filtering that reduces the input to that stage so it only works at 60%, and the next stage is still working at the same rate to render that on screen, you don't get any straight-up performance improvement, but you just saved a ton of power. The final output stage might also have been seeing reduced utilisation because the stage before it was overworked to some degree, so you might be able to pick up a 5-10% performance increase as well. If you break something and send twice as much input to that stage, completely overloading it, you might make the last stage wait quite a bit for the same output, bringing performance down a lot.

There might be some cases where this opens up the possibility of increasing the quality of the input data, resulting in higher quality output without additional performance loss as well.
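
A toy model of that argument might look like this (hypothetical stage timings, invented purely to show that throughput follows the slowest stage; nothing here is measured from real hardware):

```python
# Toy two-stage pipeline: a front end feeds the final output stage.
# In a pipelined GPU, steady-state frame rate is set by the slowest stage,
# so lightening an earlier stage mostly saves power unless it was the bottleneck.
# All numbers are invented for illustration.

def pipeline_fps(stage_times_ms):
    """Throughput is limited by the slowest stage in the pipeline."""
    return 1000.0 / max(stage_times_ms)

front_end_full     = 10.0  # front end flat out (100% busy) per frame, ms
front_end_filtered = 6.0   # same output after filtering the input (~60% busy)
output_stage       = 12.0  # final stage cost per frame, unchanged

print(pipeline_fps([front_end_full, output_stage]))      # ~83 fps, limited by output stage
print(pipeline_fps([front_end_filtered, output_stage]))  # still ~83 fps, but less power burned

# If something breaks and the front end gets twice the input, it becomes the
# bottleneck and the output stage sits waiting, so performance drops a lot:
print(pipeline_fps([front_end_full * 2, output_stage]))  # ~50 fps
```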
 
It's a bit of a gamble from AMD isn't it, that devs will start to spend time, money and resources using these features that will benefit such a small percentage of customers. It's not just Nvidia domination of the market, it's also that older AMD cards also won't benefit (at least not from all of them).

It seems Nvidia make cards that perform well with what developers currently do, and AMD make cards that will perform well if developers start doing things specifically to make their games work well on AMD products.

The last I saw, it was around 28% market share. It's hardly small. Smaller than Nvidia's by a large margin, but close to 30% is not small at all.
 
If you are handing one part of the GPU a workload that makes it run at 100% to produce an output, but the same output can be produced with some simple filtering that reduces the input to that stage so it only works at 60%, and the next stage is still working at the same rate to render that on screen, you don't get any straight-up performance improvement, but you just saved a ton of power. The final output stage might also have been seeing reduced utilisation because the stage before it was overworked to some degree, so you might be able to pick up a 5-10% performance increase as well. If you break something and send twice as much input to that stage, completely overloading it, you might make the last stage wait quite a bit for the same output, bringing performance down a lot.

There might be some cases where this opens up the possibility of increasing the quality of the input data, resulting in higher quality output without additional performance loss as well.


It's a bit deeper than that, and I'm pretty sure you know it. It's not simply a point of "doing it this way makes it 40% more efficient at that job and thus saves that power"; it's "we save this percentage doing this job, which then makes the next job faster". By using an algorithm we know these triangles do not need to be rendered because you cannot see them, so it's not simply that the job is done faster; the job itself is smaller before it even hits the pipeline. It's about having assets stored in more easily accessed memory instead of pulling them from slower memory. It's all about working smarter, not working harder. That is where Nvidia get their main edge from: the ability to shuttle work about and feed the GPU in a specific way to get jobs in and out as fast as possible.


Edit: and also to add, for most of these features that people are saying don't mean much, the main reason they even came into existence is to fight against GameWorks, so if you think they won't have an effect then sure, feel that way.
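
As a rough sketch of the "make the job smaller before it hits the pipeline" point above (a generic visibility test with made-up data; this is not AMD's actual primitive shader or DSBR implementation):

```python
# Minimal sketch of culling work before it enters the pipeline.
# Generic visibility-style test with invented data, not a description of
# Vega's real culling hardware.
import random

random.seed(0)

# Pretend each triangle carries a flag saying whether it faces the camera.
triangles = [{"id": i, "faces_camera": random.random() < 0.55} for i in range(100_000)]

def downstream_cost(tris):
    """Stand-in for the rest of the pipeline: cost scales with triangles submitted."""
    return len(tris)

# Naive path: submit everything and let late-stage hardware throw much of it away.
naive_cost = downstream_cost(triangles)

# "Work smarter" path: cull invisible triangles up front, so the job that
# reaches the rest of the pipeline is simply smaller.
visible = [t for t in triangles if t["faces_camera"]]
culled_cost = downstream_cost(visible)

print(f"submitted without culling: {naive_cost} triangles")
print(f"submitted after culling  : {culled_cost} triangles "
      f"({100 * culled_cost / naive_cost:.0f}% of the original job)")
```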
 
Not even sure if that's an option? It's set to AMD Optimised in the control panel.

Personally, most of the time I set the override to x8, sometimes x16, in the control panel and I always get better performance than the AMD Optimized setting. I can't personally tell the difference between AMD Optimized and x8, especially while playing, and especially in 4K (I don't play BF1 though). Of course I don't expect everyone to share the same experience as myself, but in all cases tessellation kills Fury X performance, and AMD Optimized isn't always the best option in my experience.
 
Yea, because I meant RX Vega had 28% market share. What's the point in either vendor bringing new features when the newest graphics cards always have the lowest share compared to previous gens, especially at the high end?
I was obviously referring to the market share of RX Vega being small, so why would you say it's 28%? If you were referring to AMD's market share, it seems a bit out of context to quote my post, as it has nothing to do with it.
Expecting devs to add features that can only be used by RX Vega is expecting a lot. Better for them to wait until 2 or 3 generations of cards have it.

Thinking they'll go to the effort of adding the new AMD tech, for little gain to themselves, just so AMD cards look better and AMD do better, might be expecting a bit much of companies that are out to make a profit. Maybe if AMD partner with them and help them, or pay them to do it; that's what Nvidia do, isn't it?

Even if the devs did decide to, there's likely to be a significant lag between AMD releasing the technology and it being included in games. We might be anticipating the release of next-gen cards by then, at which point who's really going to be looking to buy RX Vega?

So I'm not saying AMD shouldn't add these technologies, but expecting games to support them close to release is a bit much. So talking about the 'lack' of performance as being down to devs seems unfair.
 
It's a bit deeper than that, and I'm pretty sure you know it. It's not simply a point of "doing it this way makes it 40% more efficient at that job and thus saves that power"; it's "we save this percentage doing this job, which then makes the next job faster". By using an algorithm we know these triangles do not need to be rendered because you cannot see them, so it's not simply that the job is done faster; the job itself is smaller before it even hits the pipeline. It's about having assets stored in more easily accessed memory instead of pulling them from slower memory. It's all about working smarter, not working harder. That is where Nvidia get their main edge from: the ability to shuttle work about and feed the GPU in a specific way to get jobs in and out as fast as possible.


Edit: and also to add, for most of these features that people are saying don't mean much, the main reason they even came into existence is to fight against GameWorks, so if you think they won't have an effect then sure, feel that way.

It would take a far longer and much more researched post on my part to do it justice. Reducing overdraw and more costly late-stage culling doesn't necessarily equate to significantly more performance, though, with the way GPUs work. (Some stuff like packed maths is another story.)

Also, nVidia have mostly just gone for a brute-force method. Their tiled rendering, for instance, is doing nothing particularly sophisticated; it just makes raw use of their compute capabilities, which is why it works with few compatibility requirements, while AMD have gone for a more elegant solution that requires a lot more work from developers. nVidia also moved away from processing these things with discrete hardware blocks quite a while ago, while AMD were still doing most of them with blocks dedicated to specific tasks until Vega, meaning nVidia could scale these functions up performance-wise with both shader count and clock speed, while AMD were stuck increasing their performance only via clock speed on GPUs based off the same core.
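
To put that scaling point in concrete terms (a back-of-the-envelope sketch with hypothetical throughput figures; these are not specs for any real nVidia or AMD part):

```python
# Back-of-the-envelope comparison of the two scaling models described above.
# Numbers are hypothetical, not specifications of any actual GPU.

def fixed_block_throughput(clock_ghz, per_clock_rate):
    """A dedicated hardware block only gets faster with clock speed."""
    return clock_ghz * per_clock_rate

def shader_based_throughput(clock_ghz, shader_count, per_shader_rate):
    """Work run on the shader array scales with both clock and shader count."""
    return clock_ghz * shader_count * per_shader_rate

# Same architecture, bigger chip: double the shaders, slightly higher clock.
small = {"clock_ghz": 1.2, "shaders": 2048}
big   = {"clock_ghz": 1.4, "shaders": 4096}

# Fixed-function path only gains from the clock bump (~1.17x).
print(fixed_block_throughput(big["clock_ghz"], 4) /
      fixed_block_throughput(small["clock_ghz"], 4))

# Shader-based path gains from both clock and shader count (~2.33x).
print(shader_based_throughput(big["clock_ghz"], big["shaders"], 0.01) /
      shader_based_throughput(small["clock_ghz"], small["shaders"], 0.01))
```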
 
I was obviously referring to the market share of RX Vega being small, so why would you say it's 28%? If you were referring to AMD's market share, it seems a bit out of context to quote my post, as it has nothing to do with it.
Expecting devs to add features that can only be used by RX Vega is expecting a lot. Better for them to wait until 2 or 3 generations of cards have it.

Thinking they'll go to the effort of adding the new AMD tech, for little gain to themselves, just so AMD cards look better and AMD do better, might be expecting a bit much of companies that are out to make a profit. Maybe if AMD partner with them and help them, or pay them to do it; that's what Nvidia do, isn't it?

Even if the devs did decide to, there's likely to be a significant lag between AMD releasing the technology and it being included in games. We might be anticipating the release of next-gen cards by then, at which point who's really going to be looking to buy RX Vega?

So I'm not saying AMD shouldn't add these technologies, but expecting games to support them close to release is a bit much. So talking about the 'lack' of performance as being down to devs seems unfair.

Tell that to the game devs Ubi and Bethesda who are seemingly using Vega's features.
 
It would take a far longer and much more researched post on my part to do it justice. Reducing overdraw and more costly late-stage culling doesn't necessarily equate to significantly more performance, though, with the way GPUs work. (Some stuff like packed maths is another story.)

Also, nVidia have mostly just gone for a brute-force method. Their tiled rendering, for instance, is doing nothing particularly sophisticated; it just makes raw use of their compute capabilities, which is why it works with few compatibility requirements, while AMD have gone for a more elegant solution that requires a lot more work from developers. nVidia also moved away from processing these things with discrete hardware blocks quite a while ago, while AMD were still doing most of them with blocks dedicated to specific tasks until Vega, meaning nVidia could scale these functions up performance-wise with both shader count and clock speed, while AMD were stuck increasing their performance only via clock speed on GPUs based off the same core.


Well, that's mainly getting into the nuts and bolts of GCN, and that's a too-long-didn't-read, so let's not bother :D. But where Nvidia hit the golden goose was years ago, when they nailed down their scheduler. No one on this planet can do anything but say kudos, guys, kudos for that. But the main problem AMD have faced for years, and have been very vocal about for years, is Nvidia's tactics with GameWorks. AMD could make the fastest card on the planet (which they did at one point), God could bless it and Jesus could be its husband, and Nvidia would still use GameWorks to make the Nvidia stuff faster: doing stupid tricks like using zillions of tessellation triangles simply to swamp an AMD GPU, forcing PhysX work onto the CPU if it's running with an AMD GPU, etc. etc. At the end of the day, Nvidia are just as bad as Intel for these shenanigans.

So about 4 or 5 years ago AMD sat down and said enough is enough. They messed with Mantle, that died, and from its ashes came Vulkan, but that was only part of the problem. They needed the cards to have a brain, to actually look at something and cull the random trash that was holding their cards back; it's these features that are on Vega that AMD have been talking about all this time.

And we might not see huge jumps from these features in Vulkan games; the case in point is that we should see a difference in GameWorks games. The rest is FineWine.
 
In every game?
Maybe they're partnered?

Also, that's 2 (admittedly large) developers/publishers, out of how many?

EA and DICE's Star Wars Battlefront 2 will be optimised for Ryzen and Vega.

Tbh AMD seem to have a much bigger/better partnership with developers than Nvidia have.

Considering AMD are at a much lower market share, they have done extremely well to get support.

AMD are in a much better situation now for getting new features into games than they were at, say, the launch of the R9 200 series and below.
 
EA and DICE's Star Wars Battlefront 2 will be optimised for Ryzen and Vega.

Tbh AMD seem to have a much bigger/better partnership with developers than Nvidia have.

Considering AMD are at a much lower market share, they have done extremely well to get support.

AMD are in a much better situation now for getting new features into games than they were at, say, the launch of the R9 200 series and below.
FarCry 5 and the next Wolfenstein will use FP16 too.
 
EA and DICE's Star Wars Battlefront 2 will be optimised for Ryzen and Vega.

Tbh AMD seem to have a much bigger/better partnership with developers than Nvidia have.

Considering AMD are at a much lower market share, they have done extremely well to get support.

AMD are in a much better situation now for getting new features into games than they were at, say, the launch of the R9 200 series and below.

Probably due to AMD console tech, so transitioning from console to PC should be easy due to the similarities in technology?
 