
Radeon RX 480 "Polaris" Launched at $199

Surrrre.

980 -> 480 is such complete insanity that the only words to describe it will get me yet another ban.

Only on here, only on here....

Insanity? That's way over the top. The reference card isn't that far behind the 980: at 1080p TechPowerUp puts the RX 480 at 100% and the 980 at 110%, and in some games it's faster.
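To put those index numbers in perspective, here's a quick sanity check (my own toy arithmetic, not TechPowerUp's; the 60 fps figure is just an example):

```python
# TechPowerUp-style relative index: RX 480 = 100%, GTX 980 = 110%.
rx480_index = 100.0
gtx980_index = 110.0

# Going 980 -> 480, the fraction of performance you keep:
retained = rx480_index / gtx980_index
print(f"RX 480 keeps {retained:.1%} of GTX 980 performance")  # -> 90.9%

# In fps terms, a hypothetical game at 60 fps on the 980:
fps_980 = 60
fps_480 = fps_980 * retained
print(f"{fps_980} fps on the 980 -> ~{fps_480:.0f} fps on the 480")  # -> ~55 fps
```

In other words, "110% vs 100%" means a gap of roughly 9%, not 10%, from the 480's side, which is hardly "insanity" territory.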

So please tell me how getting a faster AIB RX 480 is insanity? He isn't losing money. For DX11 games it will be a sidegrade overall; for DX12 games it will be an upgrade overall. And we know that AMD cards gain performance from drivers as time goes on.

What we know is that AMD cards are stronger than Maxwell cards in DX12. He has decided that's the important thing for him.

And lastly, did you even read his post? He is starting to dislike Nvidia, so performance isn't the only reason for the change.
 
Re DX12: according to Steam stats, only 41% of users have the prerequisites to fully utilise DX12, i.e. hardware that supports it and Windows 10. I can't see a huge surge in the development of DX12 titles whilst the majority of gamers are unable to utilise it; I think it's going to be a while before DX12 becomes mainstream.
 
Yes, of course a lot of games won't have DX12; mostly indie and small projects, which is why I said triple-A titles. Then you post an interview with a guy talking about Unity's integration of DX12, who says that for now the benefit is in CPU-bound situations, and you build on that. You forget the part where he says async compute is not yet integrated into Unity, so he doesn't even have performance numbers from any testing to talk about. He also points out that only AMD has efficient utilisation of async, meaning the major wall standing in front of its adoption is Nvidia.
And yes, I do believe consoles will end up with mGPU; I have yet to hear your argument for why this is such a ridiculous idea...
Maybe you've yet to listen to my argument about that, but I've certainly made one a couple of times already. Not my fault if you plug your ears and go "NA NA NA NA NA NA".

If I have to do it again though: multi-GPU development is a PAIN. Both Sony and Microsoft have spent a whole lot of effort on the PS4 and XB1 architectures and APIs to create something that is easy to work with. Pleasing developers is a top priority for them because it dictates developers' willingness to get onboard and how many titles get made for the platform. Sony learned this lesson with the PS3 (and the PS2, though less so, because it was in the dominant market position for other reasons). In the end, there is no good reason to do multi-GPU when single GPUs are as efficient and powerful as they already are, and are ten times more convenient and practical to work with. Developers would have fits if they were forced to switch to multi-GPU development, much less do it halfway through a generation cycle.

As for DX12: no, it won't just be small projects that don't have DX12. The majority of large games also won't be DX12.

And I didn't forget anything about that interview. It just further reinforces that development towards DX12, and low-level APIs in general, is still very early; it's nowhere near the level you think it is, where most big games ship with it going forward. It's going to take a lot longer than you think before that happens.
 
All sounds legit, I'm sure.

All it sounds like to me is someone reeling off a list of sketchy excuses to justify a rather ridiculous decision.

He is losing a few fps in DX11 games and gaining a few fps in DX12 games, and he isn't losing any money with the switch. So why not try something new?

Using the terms "insanity" and "ridiculous" to describe his decision is going way over the top.
 

You still don't explain why mGPU is a pain on consoles; I am not talking about the PC. What is hard, for a dev with close-to-the-metal access to the hardware, about designing their engine for mGPU instead of a single GPU?
Is it technical problems they cannot get past, or significantly more time and resources to design the engine for mGPU instead of single?
And about DX12: I would rather see it fail and have Vulkan instead, but the fact of the matter is, it has taken off over the past couple of months; everyone seems to be getting on board, and the guy in the article is talking about a complete migration of devs to DX12-exclusive titles. That of course won't happen until Win10 market share gets to around 70%, but hybrid games have already started, and that was made clear at E3.
Just for fun, name the 10 biggest triple-A games announced at E3 that are to be released on PC, then check how many have DX12.
 
GTX 780 to RX 480. Madness? Not worth it? Good move?

I've got the upgrade bug, and would like to go superwide with FreeSync or G-Sync. Whilst I'm at it I want a new GPU.

This is much more worthwhile than going from a GTX 980 to an RX 480 (which isn't an upgrade in the slightest; I did it myself and regretted it, with the reference card at least).

I would 100% wait for a custom-cooled card though, or put in a pre-order for the Sapphire Nitro.

Again though, we have no idea if it will clock very high despite having an 8-pin. These cards seem unresponsive to much tweaking on the GPU itself, although the RAM clocked up nicely on mine.
 
Gotta say, I bought a FreeSync screen recently and went from 24" 1080p 60Hz to 27" 1440p 144Hz, and although I now get lower FPS in games, it actually feels 10,000x better. Playing The Division is awesome; it's just so, so smooth.

Playing other stuff too, like Diablo 3, Grim Dawn, Elder Scrolls Online and WoW, it's just a different experience in how fluid it feels; it's kind of hard to describe.

At one point I was tempted to go G-Sync and an Nvidia card, but I did some research and there's pretty much zero difference between the two techs other than price, though FreeSync ranges can be a little tighter.

I'm running a 290, and after tweaking a few settings in The Division I average 85 fps, which is awesome. There's practically zero difference in the on-screen image either.

If the custom 480s can outperform my 290 I may well be tempted to switch to one, purely for the small gain in performance, WattMan tweaking, better DX12 support etc. In fact I could be tempted to buy two of them to max out my screen on games that support CrossFire. I plan on switching to big Vega when it comes, but I reckon the resale on the custom cards will still be decent in six months or so.
 
For all the talk about power efficiency from AMD, it really does seem like they got spanked hard this time by Nvidia.

To be honest, I don't think Nvidia went out of their way to "Spank" AMD on the power efficiency. They just carried on with their efficiency gained from Maxwell. AMD have come on leaps and bounds since the 290 for power efficiency but have not yet caught up to Nvidia in that area.

Not quite sure how the Powergate issue happened with the RX 480 reference card, but that was AMD shooting themselves in the foot (yet again, I'm afraid :rolleyes:).

They seem to have got it under control with the driver fix, from the few reports I have heard, so well done to them for that. However, it would have been better to have sent those cards out configured properly in the first place. We all know that issues like this tarnish card launches, as it provides ammo for Nvidia users to use for decades to come, as they have been doing with the "bad drivers" moniker for years... even though we all know that is no longer the case.

I do feel that AMD are still searching for that 'Big Win' card without any hiccups to restore some confidence in their products with people outside of the Team Red circle. So we look forward to Vega for that and keep our fingers crossed. :D
 
Managing graphics tasks across two different GPUs is the problem. It's not that it can't be done; it's that it becomes a lot more difficult and requires more effort to do right. And as scaling never works reliably, due to bottlenecks, it's not necessarily the most efficient route anyway.
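The "scaling never works reliably" point can be sketched with a toy Amdahl's-law model (my own illustration, not numbers from any real engine): each frame has a portion that splits across GPUs and a serial portion (sync, transfers, inter-frame dependencies) that does not, and the serial portion caps the speedup.

```python
# Toy model: speedup from n GPUs when only part of the frame work
# parallelises. The parallel fractions below are made-up examples.
def mgpu_speedup(parallel_fraction: float, n_gpus: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_gpus)

for pf in (0.95, 0.85, 0.70):
    print(f"{pf:.0%} parallel work -> 2-GPU speedup {mgpu_speedup(pf, 2):.2f}x")
# 95% parallel work -> 2-GPU speedup 1.90x
# 85% parallel work -> 2-GPU speedup 1.74x
# 70% parallel work -> 2-GPU speedup 1.54x
```

Even at 85% parallelisable work, two GPUs buy you only ~1.74x, and real games routinely sit well below perfect scaling, which is why a single bigger GPU is usually the saner engineering choice.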

And Vulkan is even farther behind than DX12.

If you want a list of upcoming big PC games that won't use DX12:

Call of Duty: Infinite Warfare
Civilization VI
Dishonored 2
FIFA 17
For Honor
Mafia 3
Mass Effect Andromeda
Prey
Resident Evil 7
Shadow Warrior 2
South Park: The Fractured But Whole
Tekken 7
Titanfall 2
Ghost Recon: Wildlands
No Man's Sky

I'm sure there's more.

I mean, I really think the announcement of all those Xbox titles coming to PC skewed the picture a bit in terms of how much support DX12 is actually getting overall.
 
They finally added async compute to ROTTR:

http://steamcommunity.com/app/391220/discussions/0/358417008720278827/

This patch includes the following improvements:

- Adds DirectX12 Multi-GPU support. Many of you have requested this, so DirectX 12 now supports NVIDIA SLI and AMD CrossFireX configurations.
The benefits of DirectX 12 are still the same, but now you are less likely to be GPU bottlenecked and can reach higher framerates due to the improved CPU utilization DirectX 12 offers.
- Adds utilization of DirectX 12 Asynchronous Compute, on AMD GCN 1.1 GPUs and NVIDIA Pascal-based GPUs, for improved GPU performance.

Those conspiracy theories about it being removed from the PC port because it's an Nvidia-sponsored game were probably true, then. The Xbox One version had it from the beginning.
 
But why would it get added now, then?

I imagine that if it was removed at all, it was for technical reasons, not conspiracy reasons. Async compute is not just plug-and-play code, and getting it to work with varying hardware on PC is much different from getting it to work on a single closed, fixed system. This has probably been something that took them time to implement.
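The basic idea behind async compute can be illustrated with a toy simulation (my own simplification; Python threads with sleeps stand in for GPU queues, this is not how a real driver schedules work): instead of running a compute workload after the graphics workload, the two run in flight at the same time on separate queues.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def workload(name: str, seconds: float) -> str:
    time.sleep(seconds)  # stand-in for GPU work of a given duration
    return name

# Serial: graphics pass, then compute pass. Total ~= 0.2 + 0.1 s.
t0 = time.perf_counter()
workload("graphics", 0.2)
workload("compute", 0.1)
serial = time.perf_counter() - t0

# "Async": both submitted at once, like separate hardware queues.
# Total ~= max(0.2, 0.1) s.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    list(pool.map(lambda args: workload(*args),
                  [("graphics", 0.2), ("compute", 0.1)]))
overlapped = time.perf_counter() - t0

print(f"serial ~{serial:.2f}s, overlapped ~{overlapped:.2f}s")
```

The win only appears when the hardware can genuinely keep both queues busy, which is why it mattered that GCN did this efficiently while pre-Pascal Nvidia hardware struggled with it.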
 

Is it a coincidence, then, that the GTX 1000 series is supposedly good with async compute? Kind of fits in with why it's been added now. An Nvidia-sponsored game ;)
 
Considering how big this thread is and how much attention the 480 got, I would have thought there would be more than a handful of owners in the 480 owners thread.

Seems to me everyone felt let down by the 480. I certainly was. Going to 14nm I was expecting more from this card, not to mention the new architecture. And before someone says this was the 380 replacement: yes, I know.

I really hope AMD does a much better job with Vega. They will have no excuses, as it will be developed primarily for desktops and for users like us.
 