AMD Fury memory interface 4096 bit

Fury X reminds me of the HD2900XT. They strapped a massive memory subsystem to that in the hopes that it would make up for the woefully inadequate core. We all know how that turned out. :rolleyes:

Feel sorry for AMD. Remember when the HD4870 came out? It was a genuinely awesome card that totally caught Nvidia off guard. Then they went on a roll for a few generations, in particular the HD5870 and 7970, which were actual flagship cards, coming out well ahead of Nvidia's offerings. Now they are just about managing to play catch-up.

The 2900XT story had a lot more behind it than just that, though. Microsoft changed the DX10 spec because Nvidia didn't have a card ready for the launch of Vista: Microsoft allowed Nvidia to do hardware-based AA, so AMD were punished immensely for doing what was originally asked. It then birthed the 4800 series, which had the hardware to do the AA, and then AMD were back on track for a while.

The 7970 was behind the 680 until the Never Settle drivers, but yes, I agree with the rest.
 
New-tech memory + old-tech GPU = not taking full advantage of it. Problem is, Nvidia will probably bring it all together in the long run and kill AMD off.


Now just imagine if we had some HBM on an Intel CPU.
 
The Fury X is CPU-starved. It shows when you compare the 4K and 1080p results: it should show far more grunt at lower resolutions.

There are already better Windows 10 drivers that give a good 5-20% boost in performance across the board.

You can try them yourself, but they are modded and occasionally buggy. Go to Guru3D and try the June 8th 15.200.1040 drivers.

Many people are getting a nice boost in performance in every DirectX 11 game, even on 7000-series cards.

You can see the difference experimentally when you run the API overhead test.
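
For anyone wondering what that API overhead test actually measures: it hammers the driver with as many tiny draw calls as it can and counts how many per second get through before the CPU side chokes. A toy sketch of the idea in Python (nothing here touches a real graphics API; the "draw call" is an invented stand-in):

Code:
import time

def fake_draw_call():
    # Invented stand-in for submitting one draw call; a real test
    # would go through D3D/OpenGL here, paying the driver's CPU cost.
    pass

N = 1_000_000
start = time.perf_counter()
for _ in range(N):
    fake_draw_call()
elapsed = time.perf_counter() - start
print(f"{N / elapsed:,.0f} calls/sec")  # less driver overhead = bigger number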
 
Depends on the architecture. The 680 was faster than the cards before it, yet some of those earlier cards were 384-bit and the 680 was 256-bit. Also, I think one or both companies occasionally like to impress with their specs (a higher number must be better, right?), although I do feel Nvidia tend to release more optimal solutions for the particular market/resolution a card is aimed at - just an opinion though, not a fact.
 
Are you suspecting that even just switching to the DX12 API on Win10 (but still running DX11 games) will bring an improvement? Serious question.

I only ask as I feel that there's a lot of untapped potential within the GCN architecture...

Sure, Windows 10 DX11 will show gains. There are already gains being shown on leaked driver builds.
 
With all this bandwidth, why is performance not better? Or is something holding it back?

If you overclock the memory on a 290X or 980 Ti you don't get much more performance at all, because the cards simply aren't bottlenecked by bandwidth yet. HBM is a great technology, but it isn't needed with this generation of cards.

I expect AMD were betting on 20nm being available, plus a whole new architecture for the Fury, but with the 20nm fiasco they had to cut their plans back massively. They stuck with the HBM concept and crammed in as many shaders as they could within the ~600mm² die size limit that comes with the HBM interposer. ROPs weren't increased; I expect on 16nm there will be a big bump there, and they will really enjoy the extra bandwidth.

Moreover, if you look at the bandwidth increase relative to the shader-count increase, the proportion is about the same.
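
A quick sketch of that proportion using the stock specs (Fury X: 4096-bit HBM at 1 Gbps per pin; 290X: 512-bit GDDR5 at 5 Gbps; shader counts 4096 vs 2816):

Code:
fury_x_bw = 4096 * 1 / 8     # 512 GB/s
r290x_bw  = 512 * 5 / 8      # 320 GB/s
print(fury_x_bw / r290x_bw)  # 1.6x the bandwidth
print(4096 / 2816)           # ~1.45x the shaders

So roughly 1.6x the bandwidth for roughly 1.45x the shaders.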
 
Because of the processing order, on DX12 the whole AMD "disability"/"poor performance" vs Nvidia superiority narrative will be turned around. This card was developed for Win10, DX12 and Vulkan, for a new era in PC gaming. I hope it will rock! :)
 
Where do people get this delusional rubbish from?
 
From all the benches I remember seeing, the 290X seems to gain a lot of ground on Nvidia GPUs at higher resolutions, so this bigger bus might be something to look forward to.
 
16nm will enable a rapid increase in GPU cores - that's where HBM will shine, especially with its lower power usage (compared to GDDR5), leaving more of the power budget for cores on the same die.
 
Running a DX11 game in Win10 is roughly 20-30% faster than in Win8.1, even on the same leaked drivers, just because of WDDM 2.0. This was shown very well in the Project CARS benchmark, where they used different driver versions on Win8.1 and Win10. I don't know if WDDM 2.0 brings any benefit in situations where you are already GPU-bound. But to get to Nvidia's DX11 level, relying on WDDM 2.0 alone isn't enough in all games. They need to bring their overhead down.

Look at the GPU usage in this video: the 280X sits at roughly 65% GPU usage under Win10.

https://www.youtube.com/watch?v=IpATnpx45BI
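
That ~65% GPU usage is the tell. A toy model of why low GPU usage caps frame rate (all numbers invented, just to show the shape of it):

Code:
def frame(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)        # the slower side sets frame time
    return 1000 / frame_ms, gpu_ms / frame_ms

print(frame(cpu_ms=15.0, gpu_ms=10.0))    # ~67 fps, GPU only ~67% busy
print(frame(cpu_ms=10.0, gpu_ms=10.0))    # 100 fps, GPU 100% busy

Cut the CPU-side cost per frame (which is what WDDM 2.0 and the DX12 submission model are after) and fps climbs until the GPU is the limit again.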
 

[meme image]

That's not how that meme works... sorry. :o
 
I really hope Fury gets optimised and competes with the top Nvidia cards. We all win! Prices go down to match the competition.
 
The Fury X is far from optimised but is already competing with Nvidia cards @2160p.

Once optimised, the Fury X should be at least 45% faster than a 290X clock for clock (that's roughly the shader-count ratio, 4096 vs 2816), and probably more @2160p.
 
The thing with AMD is that it's a bit of a gamble: the card can turn out to be more than a bit better than its Nvidia counterpart after a few months of optimisation, but you can't be sure. So do you go for the bird in the hand, or..? It depends on your risk tolerance.
 
It seems a bit of a crapshoot when you essentially buy an AMD card to beta test it for 6 months while they figure out their drivers though, no?

Could enjoy a 980 Ti now, and then 6 months down the line start looking forward to Pascal instead, or the next AMD card.
 
Pretty sure the HBM is in large part down to LiquidVR, which AMD have a lot of interest in moving forward.

Multi-GPU rendering at the API level - to me this says no more crappy Crossfire/SLI drivers, or at the very least a lot less patching. DX12 is also important for VR, apparently.
 