The Fury(X) Fiji Owners Thread

Yeah, I'd agree someone's been at it with that broken screw; it's a shame you waited so long and got a duff one :(

Hope they get you sorted quickly.

Thing is though, the circular seals were still on the box over the tabs, so it hasn't been opened. Anyway, not to worry now.

Yeah, hopefully; it's already been a long wait.
 
That line just doesn't work, because it's just not quite good enough for 4K. Perhaps with a FreeSync screen.

Arguably there are better options for single-card 4K as well, at similar prices.

Where does it say they are solely designed for 4K?

HBM was designed to shift massive chunks of data. It's solely for 4K.
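For anyone who wants the actual numbers behind "massive chunks of data", here's a rough sketch using the published launch specs (peak theoretical figures only):

```python
# Peak memory bandwidth: Fury X (HBM1) vs 980 Ti (GDDR5).
# Bandwidth (GB/s) = bus width in bits / 8 * data rate per pin (Gbps).

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

# Fury X: 4 HBM stacks x 1024 bits = 4096-bit bus at 1 Gbps per pin.
print(f"Fury X HBM  : {bandwidth_gb_s(4096, 1.0):.0f} GB/s")
# 980 Ti: 384-bit GDDR5 bus at 7 Gbps per pin.
print(f"980 Ti GDDR5: {bandwidth_gb_s(384, 7.0):.0f} GB/s")
```

That 512 vs 336 GB/s gap matters most at 4K, where frame-buffer and texture traffic are heaviest.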

What does a FreeSync screen have to do with it? I run a non-FreeSync screen and I don't have any issues gaming at 4K.
 
It's not designed for 4K; it just happens to be weaker at anything lower, and the competition drops off a bit more at 4K. If it was designed for 4K it would have 8GB of memory, HDMI 2.0, etc. They'd be stupid to target 4K anyway, since the market for it is so small.
 
It's not designed for 4K; it just happens to be weaker at anything lower, and the competition drops off a bit more at 4K. If it was designed for 4K it would have 8GB of memory, HDMI 2.0, etc. They'd be stupid to target 4K anyway, since the market for it is so small.

HBM was designed for 4K. Or rather, was used for 4K. That's why the card performs so well at 4K and crap at anything less.

FreeSync is DisplayPort only, isn't it? Or does it work over HDMI now?

This card was designed for 4K. The only reason people won't accept that is so they can **** it off and have a quick one off the wrist about their 980 Ti.

And yes, it was stupid to target a market so small, but the future is 4K and beyond, and once again AMD are making products far too early for their own good. An 8-core Bulldozer was actually as quick as a 2600K when it launched if you ran a very heavily threaded app. The problem was that Windows did not support the cores properly and there were no heavily threaded apps or games (but the ones that were there showed what it could do).

That's AMD all over. They seem to think they have some sort of crystal ball and can predict what people will want in three years' time, only to make it now. 64-bit, 8-core CPUs... They've been doing it for years and never learn.

So yes, I know all of their shortcomings, but honestly anyone who thinks Fury is a 1080p card is clearly wrong.

I'm not trying to make AMD look good or bad here; I am just stating facts. HBM was developed and used solely for 4K. The fact that people just aren't using 4K yet is another story...

I am, so I went with the AMD because it was £50 cheaper than any 980 Ti, and cooler and quieter than all but the very best 980 Tis, which certainly don't cost £450.

And I run 4K, so it was a good move. It loves 4K! It easily swats aside my old Titan Blacks by a good 5-15%. Not bad, considering I sold them for £30 more than I paid.

I certainly wouldn't pay £600 for one though, and I think OCUK are bang out of order. Well, them and Asus, who seem to think their stock card is somehow worth more than anyone else's.
 
It's not designed for 4K; it just happens to be weaker at anything lower, and the competition drops off a bit more at 4K. If it was designed for 4K it would have 8GB of memory, HDMI 2.0, etc. They'd be stupid to target 4K anyway, since the market for it is so small.

If you can find one (just one) instance showing 4GB of HBM limiting it at 4K in some way, I may concede that point. But there's nothing anywhere showing the 4GB to be a limiting factor. And nobody uses HDMI 2.0 for a monitor; I'm not even sure monitors have 2.0 yet. A cursory glance at the 4K monitors on OCUK shows DisplayPort is the standard connection for them, with HDMI offerings of 30Hz, which suggests 1.4.

However, I will agree with you that it's a pretty dumb decision to cater solely to the 4K market since it's so small - but I'm a member of that market, so it works out for me ^_^
 
Not for 4K, and to be honest anyone buying these things to run at less than 4K really shouldn't be. They were solely designed to be used at 4K.


I am not sure that's true, and if it is then it's a bad design decision on AMD's part. If anything, it seems more likely that the high-res performance is just a side effect of choosing to use HBM rather than something they intended from the start of the design process. It's like "we'd better use this tech we've been investing in for so long".

1440p is actually quite popular with monitors like the ROG Swift, especially considering they now come with a 144Hz option. As I've said, it's also possible to get sub-40fps performance at resolutions below 1440p, and that's why you want a flagship card even if you are not at 4K. I can't really stand to play a game that drops below 45 frames per second, and I think if I am paying for a game it's a shame not to use the highest graphics options, because otherwise you are not getting the full experience the game can offer.
 
If you can find one (just one) instance showing 4GB of HBM limiting it at 4K in some way, I may concede that point.

SoM is supposed to have some smoothness issues, but it's a hog as we all know! Also this:

https://www.youtube.com/watch?v=8hnuj1OZAJs

GTA has some major hitching - or is this fixed now via a patch? Software or hardware?

I gotta admit, that GTA vid swung me to definitely decide on a Ti, then I added another for ***** and giggles ;) :D
 
If you can find one (just one) instance showing 4GB of HBM limiting it at 4K in some way, I may concede that point. But there's nothing anywhere showing the 4GB to be a limiting factor. And nobody uses HDMI 2.0 for a monitor; I'm not even sure monitors have 2.0 yet. A cursory glance at the 4K monitors on OCUK shows DisplayPort is the standard connection for them, with HDMI offerings of 30Hz, which suggests 1.4.

However, I will agree with you that it's a pretty dumb decision to cater solely to the 4K market since it's so small - but I'm a member of that market, so it works out for me ^_^

When people buy 4K TVs (which is as common as people buying 4K monitors) to use with a PC, the only option for 60Hz is usually HDMI 2.0 - a good number would rather have 40+" at that resolution than be limited to ~28".
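The arithmetic backs that up. A minimal sketch (assuming 24-bit colour and a nominal ~20% blanking overhead, so treat the figures as approximate):

```python
# Rough check on why 4K at 60Hz needs HDMI 2.0 or DisplayPort.
# Needed rate = width * height * refresh * bits-per-pixel, plus
# ~20% for blanking intervals (approximate, CVT-style timing).

def needed_gbps(width, height, refresh_hz, bpp=24, blanking=1.2):
    return width * height * refresh_hz * bpp * blanking / 1e9

for hz in (30, 60):
    print(f"4K @ {hz}Hz needs ~{needed_gbps(3840, 2160, hz):.1f} Gbps")

# Approximate usable video payloads:
#   HDMI 1.4 : ~8.2 Gbps  -> 4K tops out at 30Hz
#   HDMI 2.0 : ~14.4 Gbps -> 4K60 just fits
#   DP 1.2   : ~17.3 Gbps -> 4K60 fits comfortably
```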

I don't expect to see significant issues from 4GB of VRAM at 4K right now unless you use silly settings like 8x MSAA, etc., but it is cutting it very fine - I can easily load up to just below 4GB of VRAM (actual) used, so it could potentially become a concern sooner rather than later. I think AMD is kind of betting the farm on an idealistic vision of DX12/Vulkan uptake, with its resource streaming, and on second-guessing developers to hand-optimise VRAM use, which is likely going to lead to less-than-ideal scenarios down the road. It depends on your approach, though, as to how big an issue that is - if you are just gonna shrug and upgrade to a new GPU when/if it becomes a problem, then maybe it's not a big deal.
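If anyone wants to watch that creep towards 4GB for themselves, a quick sketch on an NVIDIA card using the pynvml bindings (AMD owners would need a different tool, e.g. GPU-Z) logs actual usage while you play:

```python
# Log actual VRAM usage once a second (NVIDIA only).
# Install the bindings first: pip install pynvml
import time
from pynvml import (nvmlInit, nvmlShutdown,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
try:
    while True:
        mem = nvmlDeviceGetMemoryInfo(handle)  # .used/.total in bytes
        print(f"VRAM used: {mem.used / 2**30:.2f} / {mem.total / 2**30:.2f} GiB")
        time.sleep(1)
except KeyboardInterrupt:
    nvmlShutdown()
```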

I played a bit with 4K on my 780 and sometimes had to sacrifice settings to stay below 3GB, so I definitely wouldn't be buying a 4GB card to enjoy 4K personally.

EDIT: The Fury is in a bit of an odd spot in that regard - with its core specs and architecture configuration it's quite hard to bring all of its power to bear at lower resolutions, but it can be loaded up a lot more efficiently when dealing with higher resolutions and more complex scenes. The card is really crying out for sub-28nm and more than 4GB of VRAM.
 
SoM is supposed to have some smoothness issues, but it's a hog as we all know! Also this:

https://www.youtube.com/watch?v=8hnuj1OZAJs

GTA has some major hitching - or is this fixed now via a patch? Software or hardware?

I gotta admit, that GTA vid swung me to definitely decide on a Ti, then I added another for ***** and giggles ;) :D


I think there is also a bigger issue in the future with DX12. DX12 will allow more draw calls, with more unique objects and more unique textures and geometry, with less performance impact. VRAM usage will increase quite a lot. 4GB is just enough right now and can only be made an issue by setting custom settings very high, to the point the games are only just playable. But that will change with DX12. Worse still is in multi-GPU setups: if 4GB is not a limiting factor because a single Fury X doesn't have the grunt, then a CrossFire setup will have the grunt but will hit the VRAM limit.
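To spell out the multi-GPU point: alternate-frame rendering mirrors the assets on every card, so VRAM doesn't pool. A trivial sketch:

```python
# Alternate-frame rendering duplicates the working set on every GPU,
# so two 4GB cards still give you a 4GB asset budget, not 8GB.
cards, vram_per_card_gb = 2, 4
print(f"Shader grunt: ~{cards}x a single card (ideal scaling)")
print(f"Usable VRAM : {vram_per_card_gb}GB, not {cards * vram_per_card_gb}GB")
```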

Of course, in a year's time, when we have some good DX12 games that really show the 4GB limit, new cards from both AMD and Nvidia will come out. We are looking at 16GB for GP100 Pascal and 8GB for mid-range Pascal. Pascal V2 could have 32GB :eek:
 
When people buy 4K TVs (which is as common as people buying 4K monitors) to use with a PC, the only option for 60Hz is usually HDMI 2.0 - a good number would rather have 40+" at that resolution than be limited to ~28".

I don't expect to see significant issues from 4GB of VRAM at 4K right now unless you use silly settings like 8x MSAA, etc., but it is cutting it very fine - I can easily load up to just below 4GB of VRAM (actual) used, so it could potentially become a concern sooner rather than later. I think AMD is kind of betting the farm on an idealistic vision of DX12/Vulkan uptake, with its resource streaming, and on second-guessing developers to hand-optimise VRAM use, which is likely going to lead to less-than-ideal scenarios down the road. It depends on your approach, though, as to how big an issue that is - if you are just gonna shrug and upgrade to a new GPU when/if it becomes a problem, then maybe it's not a big deal.

I played a bit with 4K on my 780 and sometimes had to sacrifice settings to stay below 3GB, so I definitely wouldn't be buying a 4GB card to enjoy 4K personally.

EDIT: The Fury is in a bit of an odd spot in that regard - with its core specs and architecture configuration it's quite hard to bring all of its power to bear at lower resolutions, but it can be loaded up a lot more efficiently when dealing with higher resolutions and more complex scenes. The card is really crying out for sub-28nm and more than 4GB of VRAM.

Agreed.

The next-gen GCN on 14/16nm should be quite a treat. It also needs to move on from 4 shader engines to 6 shader engines, with fewer compute units per shader engine. Preferably 2 geometry engines per shader engine.

The Fiji architecture is a pixel-shading monster, but it has proven hard to feed all those shaders, and other bottlenecks get in the way. At 4K those bottlenecks are less of an issue, and the raw pixel shading and high bandwidth are let out of the box.
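To put rough numbers on that: the Fiji figures below are the shipping config, and the 6-engine layout is just my hypothetical above, with 11 CUs per engine picked purely to keep the shader count similar:

```python
# Shader count vs front-end throughput: Fiji today vs the
# hypothetical 6-engine layout (64 ALUs per GCN compute unit).

def config(shader_engines, cus_per_engine, geom_per_engine):
    shaders = shader_engines * cus_per_engine * 64
    tris_per_clock = shader_engines * geom_per_engine
    return shaders, tris_per_clock

print("Fiji        : %d shaders, %d tris/clock" % config(4, 16, 1))
print("Hypothetical: %d shaders, %d tris/clock" % config(6, 11, 2))
```

Roughly the same shader grunt, but three times the triangle throughput per clock, which is exactly the front end Fiji is starved by at lower resolutions.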
 
I think there is also a bigger issue in the future with DX12. DX12 will allow more draw calls, with more unique objects and more unique textures and geometry, with less performance impact. VRAM usage will increase quite a lot. 4GB is just enough right now and can only be made an issue by setting custom settings very high, to the point the games are only just playable. But that will change with DX12. Worse still is in multi-GPU setups: if 4GB is not a limiting factor because a single Fury X doesn't have the grunt, then a CrossFire setup will have the grunt but will hit the VRAM limit.

Of course, in a year's time, when we have some good DX12 games that really show the 4GB limit, new cards from both AMD and Nvidia will come out. We are looking at 16GB for GP100 Pascal and 8GB for mid-range Pascal. Pascal V2 could have 32GB :eek:

Where did you get that from? :p

Get the fastest 980 Ti you can, then.


I expect that in 12-18 months' time it will be possible for a playable game to use more than 6GB of VRAM at the highest settings, but new Nvidia cards will be out in 9 months or so.

I have never understood why, when it comes to VRAM, some people are so dogged about X amount being enough for some stretch of time off in the distance, as if they have a crystal ball.
This even when money is no object.
As if, given the choice between a 12GB TX and a 6GB Ti, one would pick the Ti because it has 6GB rather than the TX because it has 12GB. It makes no sense whatsoever...

Look, there are games out there which benefit from having huge amounts of VRAM: the more the card has, the more pre-loading of textures it can do, and thus the smoother and more fluid the performance.

Also, I can make my own game which will need the 4GB I have available at 1080p and at decent performance levels. As a matter of fact, in one of my creations I am starting to worry about how much VRAM it's starting to demand.

With DX12 you will get 15x the draw-call throughput; what that means is that DX12 can issue calls for 15x more objects at any given time than the current DX11 can.
More objects = more VRAM.
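A crude sketch of how that scales (the texture counts are made up purely for illustration; 1 byte per texel approximates BC3 compression, and the mip chain adds about a third):

```python
# VRAM cost of unique textures: size^2 * bytes-per-texel, plus ~1/3
# extra for the mip chain. 1 byte/texel approximates BC3 compression.
def texture_mb(size, bytes_per_texel=1.0):
    return size * size * bytes_per_texel * (4 / 3) / 2**20

# Illustrative only: if DX12's draw-call headroom lets a scene carry
# 15x more unique 1K textures, texture memory scales the same way.
for n in (500, 500 * 15):
    print(f"{n:5d} unique 1K textures ~ {n * texture_mb(1024) / 1024:.2f} GB")
```

Going from ~0.65GB to nearly 10GB of textures alone is exactly why 4GB stops being "just enough".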


I'll stick with my advice and leave it at that :)
 
Something I have been saying for a long time.
Like you, I also program some game engines; my passion is vast outdoor terrain engines, making them as realistic as possible. I'm currently modifying Proland. Terrain engines have interesting problems: the vast view distance entails lots and lots of unique objects being rendered at the same time, and also very high resolution textures. I could easily chew up 4GB of VRAM for what I would ideally want to render.
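For a feel of how quickly terrain data eats VRAM, a sketch with illustrative numbers (not Proland's actual budgets):

```python
# A single large heightmap plus a compressed colour layer adds up
# fast, before any meshes or vegetation are counted.
def grid_gb(side, bytes_per_sample):
    return side * side * bytes_per_sample / 2**30

side = 16384                        # 16K x 16K terrain grid
height = grid_gb(side, 4)           # 32-bit float heights -> 1 GB
albedo = grid_gb(side, 1) * (4 / 3) # ~BC-compressed colour + mips
print(f"heightmap {height:.2f} GB, albedo ~{albedo:.2f} GB")
```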
 
Something I have been saying for a long time.
Like you, I also program some game engines; my passion is vast outdoor terrain engines, making them as realistic as possible. I'm currently modifying Proland. Terrain engines have interesting problems: the vast view distance entails lots and lots of unique objects being rendered at the same time, and also very high resolution textures. I could easily chew up 4GB of VRAM for what I would ideally want to render.

Nice :)

2D distant tree rendering http://proland.inrialpes.fr/gallery/trees/image338.png

I could do with some of that to morph into 3D as you get close.

I'm trying to stuff a lot of 3D vegetation into my maps, and optimisation is a nightmare. I have a total of 1400 3D trees here, with multiples running on the same draw call, but it's still upwards of 700,000 instances calculated.
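That's the whole point of instancing: the mesh data is uploaded once and only a small per-instance transform repeats. A sketch using the figures above (the per-tree vertex count and byte sizes are just illustrative guesses):

```python
# Instanced vs naive vegetation buffers, roughly.
VERTS_PER_TREE = 5000      # illustrative mesh density
BYTES_PER_VERT = 32        # packed position + normal + uv
BYTES_PER_XFORM = 64       # one 4x4 float matrix per instance
unique_trees, instances = 1400, 700_000

meshes = unique_trees * VERTS_PER_TREE * BYTES_PER_VERT  # stored once
naive = instances * VERTS_PER_TREE * BYTES_PER_VERT      # copy per instance
instanced = meshes + instances * BYTES_PER_XFORM         # meshes + transforms

print(f"naive copies: {naive / 2**30:.1f} GiB")
print(f"instanced   : {instanced / 2**20:.0f} MiB")
```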



On this map I have hundreds of thousands of much smaller vegetation objects, mainly collision-physics grass; it's a hell of a drain on resources.





This is exactly the sort of thing that DX12 will do a lot better, folks :)


PS: my CPU is underclocked to 1.3GHz as its AIO cooler is still away at RMA.
 