GTX 1060 vs RX 480 - head-to-head showdown

I have said, and still maintain, that either card will do you proud. In all honesty I would be happy with either if I were gaming at 1080p, but the one thing that irks me is the notion that NVidia can't do DX12/async etc. and that the 480 is therefore the better option. I just try to explain that these things take time to get running and working well, and not a few days either.

And as I said a couple of days ago, there is a £160 price difference between similarly specced G-Sync and FreeSync screens, so that really does make the FreeSync + 480 combo look a very attractive buy.

Yes, but you are the one who brought up DX11 games, obviously referring to the strength of the 1060. You also said loads of games are coming.

I ask you again to provide a list.

What I see so far is that all Microsoft titles are favouring the 480. They have some superb titles coming, including Forza for those who have always wanted it on PC, as of course do several other big studios. Even the BF devs stated earlier in the year via Twitter that beyond 2016 they would be seeking to adopt DX12.
 
Yes, but you are the one who brought up DX11 games, obviously referring to the strength of the 1060. You also said loads of games are coming.

I ask you again to provide a list.

What I see so far is that all Microsoft titles are favouring the 480. They have some superb titles coming, including Forza for those who have always wanted it on PC. Even the BF devs stated earlier in the year via Twitter that beyond 2016 they would be seeking to adopt DX12.

I am not going to provide a list, and you would have to be daft to think there isn't a big list of DX11 games coming. Even DX12 hasn't impressed me at all so far; if anything I'm very disappointed. I say again: even AMD's poster-child DX12 game, Hitman, performs better in DX11 than in DX12 for AMDMatt, so what does that tell you about the way DX12 is being used?
 
I am not going to provide a list, and you would have to be daft to think there isn't a big list of DX11 games coming. Even DX12 hasn't impressed me at all so far; if anything I'm very disappointed. I say again: even AMD's poster-child DX12 game, Hitman, performs better in DX11 than in DX12 for AMDMatt, so what does that tell you about the way DX12 is being used?

The truth is I do not know what games are coming in DX11 only; that is why I asked. I'm not concerned about Hitman but about future titles, the ones that will fully use DX12 or Vulkan, if that is the API the industry seems to be moving towards.

Your comment suggested otherwise, but now you're not willing to back it up...
 
You asked for evidence, he provided it... So you must only want evidence that supports your view then? Come on Greg :)

No, he is biased: look at anything he does with AMD and it is wonderful, but anything NVidia and it is bad, and that is that.

The guy was sent a 480 from AMD, so let's be fair here at least.
 
The truth is I do not know what games are coming in DX11 only; that is why I asked. I'm not concerned about Hitman but about future titles, the ones that will fully use DX12 or Vulkan, if that is the API the industry seems to be moving towards.

Your comment suggested otherwise, but now you're not willing to back it up...

Neither do I, and nice switch to 'DX11 only' ;) You are getting so hung up on DX12 that you're missing the basics of what has been said.

Answer me this then: AMD worked with Square on DX12, with lots of boasting about how good it is, so why does DX11 score more frames than DX12 on AMD hardware? This is why I don't get as hung up on DX12 as others do.
 
No, he is biased: look at anything he does with AMD and it is wonderful, but anything NVidia and it is bad, and that is that.

The guy was sent a 480 from AMD, so let's be fair here at least.

It is Roy Taylor's fault. I remember when Kayle exposed Roy over how he wanted to run the Nano benchmarks: he wanted some AMD Twitter shills to do the benchmarking instead of professionals. In the end he deleted his account because he admitted his mistakes.

When Raja showcased Polaris he said he wants "console-quality FPS on PC", because AMD thinks the console is superior to the PC, which they have admitted. Some AMD fanboys think it is everyone's fault except AMD's, and they also think it is Nvidia's fault that AMD is facing a financial crisis; however, it is always AMD's own fault, for hiring non-professionals and making sure PC gamers dislike them more and more.
 
Informative post. Makes a lot of sense with regard to the benchmark software.
I was looking at the same price range but decided on a Fury at £299 on here which I thought was a great price (I have a FreeSync monitor…)

All things being equal…
If you were coding a game, as opposed to a benchmark, to be released on Xbox One, PS4 and Windows (DX12 or Vulkan), what would you likely be doing with the compute commands with regard to async compute, given the underlying hardware in the consoles is mostly identical, and how would that carry over to the Windows PC release?

Cheers. The Fury X sounds like it was a good choice if you have a FreeSync monitor (and G-Sync really is comparatively a bit of a rip-off imo), especially at that price. I just ordered a 1060 myself after sleeping on it and seeing one in stock at my price range; I really can't go over £250 or she'll kill me. :)

The async compute thing is interesting. I will confess I don't work on any console code, but you pick up some info from various presentations.

Regardless of platform, though, async isn't a guaranteed performance boost. It really depends on the workloads. For example, trying to async a bandwidth-heavy, low-instruction-count compute shader could actually end up slower than running it sequentially on the direct queue. It could easily cause the graphics workload to suffer bandwidth problems, and/or thrash the cache so that both queues' shaders get excessive cache misses and then use up further bandwidth pulling the missed data from memory again.
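
To make that concrete, here's a rough D3D12 sketch (C++; names are made up, and the device, recorded command lists and fence are assumed to exist already) of what "putting compute on the async queue" actually means at the API level. The API makes it trivial to express; whether it helps is the part you have to profile.

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch only: error handling is omitted, and in a real renderer the compute
// queue would be created once at startup, not per submission.
void SubmitWithAsyncCompute(ID3D12Device* device,
                            ID3D12CommandQueue* directQueue,
                            ID3D12GraphicsCommandList* gfxList,
                            ID3D12GraphicsCommandList* computeList,
                            ID3D12Fence* fence, UINT64& fenceValue)
{
    // The "async" path is simply a second queue of type COMPUTE.
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // Kick the compute work and have it signal a fence when finished.
    ID3D12CommandList* computeLists[] = { computeList };
    computeQueue->ExecuteCommandLists(1, computeLists);
    computeQueue->Signal(fence, ++fenceValue);

    // Graphics work that does not depend on the compute results can go
    // straight to the direct queue, so the two may run side by side.
    ID3D12CommandList* gfxLists[] = { gfxList };
    directQueue->ExecuteCommandLists(1, gfxLists);

    // Anything submitted to the direct queue after this wait only starts
    // once the compute queue has signalled. If both queues end up fighting
    // over bandwidth or cache, this arrangement can be slower than just
    // running the compute pass sequentially on the direct queue.
    directQueue->Wait(fence, fenceValue);
}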

Anything designed for consoles is going to ensure its async work plays nicely with GCN's architecture. You'd just profile it and go from there when deciding whether to async those compute commands in that part of the frame, perhaps break a large compute kernel into separate kernels and partly async it, etc. I would imagine the PC version would also consider the same issues on NVidia, but Pascal has only been available for a couple of months, so some of these early titles may not have considered that, or even been able to profile the impact of async on NVidia until very recently (there's no real support for it on Maxwell). Fundamentally, most compute passes designed initially for GCN will likely run OK on NVidia architectures, although perhaps not ideally. Still, you'd write for 64-thread wavefronts on GCN and those divide nicely into NVidia's 32-thread warps.

People might think devs are lazy, but honestly time is usually very limited and game studios are infamous for going into 'crunch' to meet publisher dates (60-80 hour weeks). If the HLSL ports of your PS4 shaders don't cripple NVidia GPUs, some studios might not invest the time to optimise them to get that last 10% out of NVidia hardware. In the case of Vulkan, where you can use GCN intrinsics, why not port those too if you already have GCN-optimised shaders available from the PS4 version?

Ultimately, for any title where consoles are a first-class platform (not ported to after the PC release), it's going to be easier to optimise for GCN on PC because that work is already partly done. I don't want to mislead, however: any studio that cares about PC as a platform will put serious effort into NVidia performance, simply because that's 70-80% of the PC market.
 
Do you have a link to where NVidia said it would be in the next driver, please? I will give them some what-for, as there have been three (?) drivers and Pascal still isn't showing gains in Vulkan.

I can't see how they ever will. This isn't something that can be solved with software.

I've read carefully what D.P. and fs123 provided and my take-aways are:

1) Maxwell had severe limitations that basically made EMULATING an async implementation via software prohibitively expensive. Going async on that architecture would just slow the card down. How NVidia ever claimed it could support it via drivers is beyond me, unless they meant 'we can implement a DX12 driver for Maxwell (even though it will be crap and run slower)'. Yes, sure...

2) Pascal has 2 important changes on this front. One is fast context switching: changing from one task to another costs <100 microseconds on Pascal (on Maxwell you'd have to wait for an entire batch to finish, even up to 20 milliseconds). The second is something NVidia calls dynamic load balancing, which means that cores can switch to compute tasks on the fly when they've finished their workload.

Now, the fast context switch means that you can actually implement async and have a chance to get some sort of speedup in Pascal (in Maxwell it's just plain impossible). The latter is the means through which you get the speedup: once the graphics work is done, the cores can switch to a compute task within the batch. However, the work must have been pre-staged like this and the switch is one-way until the next scheduled batch of work (wavefront/warp whatever you want to call it).

I can't see how they can beat the fact that GCN can actually perform a context switch in a SINGLE clock cycle (one instruction) and can do so for every individual SM/CU, and at any time, even picking up new work. This is done in hardware and it's part of the tradeoff: all those extra registers to hold state occupy die space and consume power. It's part of why GCN is not as power efficient as Maxwell/Pascal.

There are trade-offs. But as far as async is concerned, it's just a pure hardware limitation on NVidia's part. The thing about the async API is that the game can keep feeding work to the card. It doesn't have to stop and wait. It can just keep streaming more and more work as long as the card can take it. GCN has the hardware to get that work to compute units immediately. Maxwell essentially can't do it (in any sort of meaningful way), whereas Pascal has a single trick up its sleeve (dynamic load balancing).
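
To illustrate what I mean by "keep feeding work", here is a rough Vulkan sketch (C++; all handles are hypothetical and assumed to be created elsewhere, it only shows the submission pattern): compute goes to its own queue, graphics waits on a semaphore where it needs the results, and the CPU never has to stall in between.

#include <vulkan/vulkan.h>

// Sketch only: queues, command buffers and the semaphore come from elsewhere;
// error handling is omitted.
void SubmitFrame(VkQueue graphicsQueue, VkQueue computeQueue,
                 VkCommandBuffer gfxCmd, VkCommandBuffer compCmd,
                 VkSemaphore computeDone)
{
    // Compute work goes straight to its own queue and signals a semaphore
    // when it has finished.
    VkSubmitInfo computeSubmit{};
    computeSubmit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    computeSubmit.commandBufferCount = 1;
    computeSubmit.pCommandBuffers = &compCmd;
    computeSubmit.signalSemaphoreCount = 1;
    computeSubmit.pSignalSemaphores = &computeDone;
    vkQueueSubmit(computeQueue, 1, &computeSubmit, VK_NULL_HANDLE);

    // The graphics submission waits on that semaphore only at the stage
    // where it first needs the compute output (assumed here to be the
    // fragment shader). The CPU does not block: it can keep recording and
    // submitting more work on both queues.
    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
    VkSubmitInfo gfxSubmit{};
    gfxSubmit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    gfxSubmit.waitSemaphoreCount = 1;
    gfxSubmit.pWaitSemaphores = &computeDone;
    gfxSubmit.pWaitDstStageMask = &waitStage;
    gfxSubmit.commandBufferCount = 1;
    gfxSubmit.pCommandBuffers = &gfxCmd;
    vkQueueSubmit(graphicsQueue, 1, &gfxSubmit, VK_NULL_HANDLE);
}

Whether the GPU actually overlaps those two submissions is exactly what the hardware discussion above is about: GCN can pick the compute work up immediately, Pascal can via dynamic load balancing, and Maxwell essentially can't.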

I'm sticking with GCN for DX12. The async part will always give more gains than on Pascal.

Now, as far as features (CR/ROV) of DX12_1 are concerned... we'll have to wait and see.
 
No, he is biased: look at anything he does with AMD and it is wonderful, but anything NVidia and it is bad, and that is that.

The guy was sent a 480 from AMD, so let's be fair here at least.

He actually shows how many of the tech sites have obviously not been doing their jobs properly and staying unbiased. The Doom OpenGL fiasco clearly shows Nvidia are leaning on them to not show Vulkan benchmarks, and in particular how Tom's selection of titles is questionable for giving a balanced review.

Both brands obviously have titles that benefit their own cards, but when those brands start to influence tech sites/reviewers to sway the public away from the whole picture, that is not a good thing.
 
I went Nvidia 1060 this time around. I was really hyped for the RX 480 Nitro and had a pre-order, but my partner surprised me with the £299.99 MSI Gaming somethingsomething 1060. I realise it's probably an overpriced card and whatnot, and I 'might' have been better off waiting, but to be honest I'm not a fanboy of either company; for me it came down to wanting to get rid of my GTX460ti, which was struggling with most of the games I wanted to play (if it could run them at all). Installed it yesterday after some moving around in my case and had a lovely night of finally playing Overwatch at 60FPS on near max settings (my other parts hold me back a little). I have no idea how to use this MSI gaming app thing though, so I'm probably missing out on some performance; I just clicked the OC mode and then got lost with the 3 different fan settings I can pick from :(.

Just wanted to share my story, and I'm happy with what I ended up with. I don't think we really need to fight over which card is best; competition is healthy in this industry and we can all just buy what we can afford and like. It seems a lot of people in these posts just bash the company and not the card, which makes it very, very hard for people who come here looking for information on which card actually suits their own needs.

Both cards look amazing for what they do and I would have been happy with either; I ended up with the 1060. I may regret it, but that doesn't mean games will be unplayable on my machine at any point in the next few years; I can just lower settings and still have a great time playing all the new games :). I hope everyone gets the card they want and continues to enjoy playing great games in the future!
 
Yeah, AMD is doing way better in Vulkan and DX12 games. There's no point in looking at DX11 benchmarks because those are a thing of the past now. All current cards can run DX11 well enough, and we aren't going to see much more on that front.

Do people stop playing DX11 games now then as well?

Certainly not.

But given what you are seeing, can you in good conscience tell someone who intends to keep their mid-range card for a couple of years at least that the 1060 is worth it?

Pay a bit more and get 2GB less RAM for 5-10% extra FPS on their DX11/OpenGL games?

What about the DX12/Vulkan games they can enjoy better now? What about a year from now when they're split evenly? What about two years from now when DX11 games are just retro?

I stand by what I said: the 1060 is a better STOP-GAP card if you're definitely upgrading again in 6-8 months. Otherwise go with the 480.

And not even that: wait for the Nitro+ and the other 480 partner cards. If they're as good as the 1060 even in DX11, then there's no reason at all to go with the 1060.
 
The problem with Vulkan for Nvidia is that AMD gave the team Mantle, which the team then developed into Vulkan. I imagine Vulkan is always going to favour AMD, which is fair in an unfair way.

Indeed; however, it will take time for NVidia to get to grips with it. AMD were working on Mantle for years before their own cards were using it. I sure hope it doesn't take NVidia that long though :D

Vulkan isn't Mantle though? Sure, Mantle helped speed up development, but parts of it were removed and other parts rewritten. Besides, the Khronos Group is made up of several different companies and they all contributed to Vulkan. An Nvidia guy is actually president of the group.

I really don't think Nvidia or AMD would be developing Vulkan if it offered an unfair advantage to the other.

Besides, Nvidia were first out with Vulkan drivers and Vulkan demos. They were the first to show Doom running on Vulkan.
 
I can't see how they ever will. This isn't something that can be solved with software.

I've read carefully what D.P. and fs123 provided and my take-aways are:

1) Maxwell had severe limitations that basically made EMULATING an async implementation via software prohibitively expensive. Going async on that architecture would just slow the card down. How NVidia ever claimed it could support it via drivers is beyond me, unless they meant 'we can implement a DX12 driver for Maxwell (even though it will be crap and run slower)'. Yes, sure...

2) Pascal has 2 important changes on this front. One is fast context switching: changing from one task to another costs <100 microseconds on Pascal (on Maxwell you'd have to wait for an entire batch to finish, even up to 20 milliseconds). The second is something NVidia calls dynamic load balancing, which means that cores can switch to compute tasks on the fly when they've finished their workload.

Now, the fast context switch means that you can actually implement async and have a chance to get some sort of speedup in Pascal (in Maxwell it's just plain impossible). The latter is the means through which you get the speedup: once the graphics work is done, the cores can switch to a compute task within the batch. However, the work must have been pre-staged like this and the switch is one-way until the next scheduled batch of work (wavefront/warp whatever you want to call it).

I can't see how they can beat the fact that GCN can actually perform a context switch in a SINGLE clock cycle (one instruction) and can do so for every individual SM/CU, and at any time, even picking up new work. This is done in hardware and it's part of the tradeoff: all those extra registers to hold state occupy die space and consume power. It's part of why GCN is not as power efficient as Maxwell/Pascal.

There are trade-offs. But as far as async is concerned, it's just a pure hardware limitation on NVidia's part. The thing about the async API is that the game can keep feeding work to the card. It doesn't have to stop and wait. It can just keep streaming more and more work as long as the card can take it. GCN has the hardware to get that work to compute units immediately. Maxwell essentially can't do it (in any sort of meaningful way), whereas Pascal has a single trick up its sleeve (dynamic load balancing).

I'm sticking with GCN for DX12. The async part will always give more gains than on Pascal.

Now, as far as features (CR/ROV) of DX12_1 are concerned... we'll have to wait and see.

That is a great summary!!
 
I am not out to convince anyone that one is better than the other. I just don't get why some are putting so much stock into DX12 when, even on AMD hardware, DX11 is working better. Async compute is part of FM's Time Spy and I got an 11% uplift there with async on as opposed to off, so why won't Pascal see an uplift in Vulkan games too?

People keep ignoring these basics and just keep telling me that in the future, the 480 is the better choice.
 
That still doesn't make sense. Will those people not be interested in what performance they will be getting in DX11 as well then? I would want to know how a card I am possibly purchasing will perform in DX11/DX12/OGL/Vulkan and get as much info as possible. There is a whole stream of games coming over the next 3 years and they will mostly be DX11.

But we know that already. In most cases the reference RX 480 is less than 10% slower. The partner aftermarket 480s will probably close that to almost equal performance. But let's say an aftermarket partner 1060 (e.g. MSI Gaming) is still 5-10% faster.

Do you really find this so important as to recommend the 1060 over the 480 for someone who is on a 3-year upgrade cycle? People who will play more DX12/Vulkan games than DX11 games over that period? People who will sell their cards in 2018 when there are very few DX11 titles left?

I really don't understand this logic.
 