"Overclocks Dream" The Fury X

Yeah, I understand it's a lot faster at 4K,
but it has more of everything, not just memory bandwidth!?
Maybe I'm not geeky enough to understand it, lol, I'd like to though.
 
I think you're overanalyzing an offhand comment made by someone in the process of exiting stage left. Huddy said it had 4 kilobits of RAM in that same moment. Are we going to hold him to that?
 

That's not the same thing.
Huddy made a mistake in what he said, not the meaning.

Should we not listen to anything that AMD engineers say about products then?
 
Yeah, I understand it's a lot faster at 4K,
but it has more of everything, not just memory bandwidth!?
Maybe I'm not geeky enough to understand it, lol, I'd like to though.

You can't really get a clear picture of a card's true potential at lower resolutions without any graphs of GPU or CPU usage. AnandTech stated that the Furys are often CPU bottlenecked at lower resolutions (of course it depends on the game in use as well). That's why there's barely any difference against the 290X/390X.

One graph of GPU usage shouldn't be that hard to do.
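For what it's worth, the logging side really isn't hard. A rough sketch (it assumes an NVIDIA card, since nvidia-smi's utilisation query is the tool I can vouch for; the AMD-side equivalent depends on driver and OS):

```python
# Rough sketch: sample GPU utilisation once a second during a benchmark run
# and dump it to CSV for graphing. Assumes an NVIDIA card (nvidia-smi);
# the AMD-side equivalent varies by driver/OS.
import csv
import subprocess
import time

SAMPLES = 120  # two minutes at 1 Hz; adjust to cover the benchmark

with open("gpu_usage.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "gpu_util_pct"])
    start = time.time()
    for _ in range(SAMPLES):
        util = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        writer.writerow([round(time.time() - start, 1), int(util)])
        time.sleep(1)
```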
 

But the 980Ti isn't suffering from this "CPU bottleneck"...
 
All cards launch without voltage control. AMD & NV both leave it up to 3rd party community tools.

Why are AMD having to suffer this ignorance?

I understand what you're saying, but for example the ASUS Fury Strix has a completely custom PCB, with completely custom power circuitry and their own GPU Tweak II software, and yet they still cannot give voltage control. So it must be down to the GPU/HBM/interposer package limiting it, which puts it squarely in AMD's court.
 
But the 980Ti isn't suffering from this "CPU bottleneck"...

Not to the same extent. They get twice the draw-call performance in the 3DMark API overhead test, so they get CPU capped at a higher FPS than AMD does.

A Project Cars developer on Guru3D had some good insights into driver performance from both teams.

http://forums.guru3d.com/showpost.php?p=5116716&postcount=901

In Project Cars the range of draw calls per frame varies from around 5-6,000 with everything at Low up to 12-13,000 with everything at Ultra. Depending on the single-threaded performance of your CPU there will be a limit to the amount of draw calls that can be consumed and, as I mentioned above, once that is exceeded GPU usage starts to reduce. On AMD/Windows 10 this threshold is much higher, which is why you can run with higher settings without FPS loss.

So, on my [email protected] the NVIDIA (Titan X) driver can consume around 11,000 draw-calls with our DX11 API call mix - the same Windows 7 System with a 290x and the AMD driver is CPU limited at around 7000 draw-calls : On Windows 10 AMD is somewhere around 8500 draw-calls before the limit is reached (I can't be exact since my Windows 10 box runs on a 3.5ghz 6Core i7)

I recommend reading all his posts in that thread.
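If it helps to see the arithmetic behind those numbers, here's the model they imply (my sketch: treat the driver as having a fixed draw-call throughput per second, so a scene issuing D calls per frame is capped at throughput/D fps; the per-second throughput figures below are illustrative assumptions, not his measurements):

```python
# Back-of-envelope: if a driver can consume T draw calls per second on a
# given CPU, a scene issuing D calls per frame is CPU-capped at T/D fps.
# The throughput figures are illustrative assumptions, not measurements.
DRIVER_CALLS_PER_SEC = {
    "NVIDIA DX11":      1_100_000,
    "AMD DX11 (Win7)":    700_000,
    "AMD DX11 (Win10)":   850_000,
}

for calls_per_frame in (6_000, 13_000):  # Project Cars: Low vs Ultra settings
    print(f"{calls_per_frame} draw calls per frame:")
    for driver, throughput in DRIVER_CALLS_PER_SEC.items():
        print(f"  {driver:18} CPU-limited at ~{throughput / calls_per_frame:.0f} fps")
```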
 
Can we please steer away from 1080p performance discussion :)

Orangey> Perhaps we are reading too much into it. Maybe it's the case of an excited engineer giving more information than the business expected?
 
Regarding the BS statement by some AMD guy, Unwinder, creator of MSI Afterburner, reckons overclocking will still suck even with voltage unlocked.

That's both positive and negative info. Positive: device 30h in the dump is the IR3567B with no doubts, so driver-level I2C access is indeed working and the VRM can be accessed at software level. Negative: such a reaction to overvolting (graphics card downclocking) smells like hitting some hardware limit; I'm not too optimistic about improving it.
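For anyone wondering what "device 30h" means: it's an I2C address, and the register-read pattern he's describing looks roughly like this (a sketch using the generic Linux smbus2 package; the bus number and register offset are placeholders, and on a real graphics card the bus sits behind the GPU and vendor driver, so this only shows the shape of the access):

```python
# Sketch of the register read being described: "device 30h" is I2C address
# 0x30, where the IR3567B voltage controller answers. Bus number and
# register offset below are placeholders, and on a real graphics card the
# bus sits behind the GPU and vendor driver rather than /dev/i2c directly;
# this only shows the generic SMBus access pattern.
from smbus2 import SMBus  # pip install smbus2

I2C_BUS = 1         # placeholder bus number
VRM_ADDRESS = 0x30  # "device 30h" from the dump
REGISTER = 0x00     # placeholder register offset

with SMBus(I2C_BUS) as bus:
    value = bus.read_byte_data(VRM_ADDRESS, REGISTER)
    print(f"reg 0x{REGISTER:02x} = 0x{value:02x}")
```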
 
This is a draw-call test on the latest drivers on my system.
http://www.3dmark.com/aot/42647

People need to forget about the Fury X and how it runs at 1080p; this GPU wasn't aimed at being a 1080p card. It's designed for higher resolutions. If you buy this for 1080p, you're doing it wrong.

There is a good reason why it runs better the higher the resolution becomes: its high-bandwidth memory performs better the more demanding the resolution is.
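Rough numbers, to show the scaling (a back-of-envelope sketch; the traffic multiplier is a made-up assumption standing in for overdraw, post-processing and texture reads, but the 512 GB/s peak for Fury X's HBM is real):

```python
# Back-of-envelope: memory traffic for a 32-bit colour target at 60 fps,
# scaled by a made-up multiplier standing in for overdraw, post-processing
# and texture reads. Fury X's HBM peaks at 512 GB/s (that part is real).
RESOLUTIONS = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
BYTES_PER_PIXEL = 4
TRAFFIC_MULTIPLIER = 40  # assumption: total traffic ~40x one framebuffer write
FPS = 60
HBM_PEAK_GBPS = 512      # Fury X peak memory bandwidth

for name, (w, h) in RESOLUTIONS.items():
    gbps = w * h * BYTES_PER_PIXEL * TRAFFIC_MULTIPLIER * FPS / 1e9
    print(f"{name}: ~{gbps:.0f} GB/s ({gbps / HBM_PEAK_GBPS:.0%} of 512 GB/s)")
```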

It's a GPU architecture for the future; stop looking back at the old.

Now what I am really disappointed about is the fact there just isn't any stock... What was the point in giving out early stock with a manufacturing issue and then getting that reviewed? AMD didn't do themselves any favours here.

Even if the GPU gets completely fixed, there will always be people put off by the early review results.
Tbh I would have liked it if they had just waited till it was ready.

The overclocking isn't just down to AMD either; we've always had to wait for third-party software to enable this.
 
Basically, until we get DX12 games AMD's drivers will suck; until then we can all enjoy no CPU bottleneck with NVIDIA.

I do hope not, another year or so of friends complaining about their AMD GPU drivers doesn't seem like much fun to me. ;)

More seriously, DirectX 12 is an interesting one.

On the one hand, we want the developers to have more control, to be able to get as close as possible to the metal and squeeze the best performance from the hardware.

Then on the other hand, we want games sooner and don't want to have to wait 3-5 years between decent AAA titles.

While on another hand, we want the developers to have to work harder on every aspect of a game, to make them look as good as possible.

And on yet another hand, we like optimizations like Mantle, GameWorks and TrueAudio, although not necessarily the business practices behind them.

Too many hands there :) and going off topic a bit, sorry.
 
This is a draw-call test on the latest drivers on my system.
http://www.3dmark.com/aot/42647

People need to forget about the Fury X and how it runs at 1080p; this GPU wasn't aimed at being a 1080p card. It's designed for higher resolutions. If you buy this for 1080p, you're doing it wrong.

Don't jump all over me, but...

If the card is for higher resolutions and those who bought it for 1080p are doing it wrong,
why in the dickens did it not come with HDMI 2.0?

Also...

How many people own a monitor over 1080p???

If what you are saying is true... this is quintessentially an enthusiast's card and not for the average gamer...

Don't buy it. Sorry
 

Because it's aimed at people running DisplayPort! 1440p-and-above gaming.

For 1080p there are other GPUs out there in AMD's lineup that will do the job just fine and not cost as much. If you want better than 1080p gaming, then you want the Fury X.
People need to stop looking at this GPU for 1080p!!!
 
It's a GPU architecture for the future; stop looking back at the old.

I don't disagree, but IMO it's a completely pointless exercise bringing it to market right now. Yet again AMD have pushed something before its time, at a probably not insignificant cost to themselves, and generated negative PR due to all the issues surrounding that, i.e. you can't realistically utilise many of its strengths due to limitations of technology elsewhere, and by the time the tech in general catches up it will be history.

How close the 390 is to the Fury in that chart stands out for me.
They've got to be able to do a lot more with drivers;
looking at the specs it shouldn't be that close!?

Often with GPUs, the more you increase the number of "things", the harder it gets to utilise them all efficiently. As shankly pointed out, it's hard to effectively utilise some of the card's strengths at 1080p, and as I mentioned before this is one of the reasons why GPU manufacturers change the architecture periodically rather than just keep adding bigger numbers to it.
 
It's not aimed at higher resolutions; it's just a weaker GPU that happens to do well at 4K, probably due to the extra memory bandwidth HBM offers. If it was aimed at 4K it would have had more than 4GB for a start.
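Worth separating two things there: the render targets themselves are tiny even at 4K, so it's texture and asset budgets that would push past 4GB. A quick check (the render-target count is a made-up assumption):

```python
# Quick check: even at 4K, render targets are a rounding error next to 4GB.
# The target count is a made-up assumption; it's textures and other assets,
# not the framebuffer itself, that squeeze a 4GB card at 4K.
WIDTH, HEIGHT = 3840, 2160
BYTES_PER_PIXEL = 4
RENDER_TARGETS = 8  # assumption: colour + depth + a few G-buffer/post targets

target_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / 2**20
print(f"one 4K 32-bit target: {target_mb:.0f} MB")
print(f"{RENDER_TARGETS} such targets: {target_mb * RENDER_TARGETS:.0f} MB of 4096 MB")
```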
 
Because it's aimed at people running DisplayPort! 1440p-and-above gaming.

For 1080p there are other GPUs out there in AMD's lineup that will do the job just fine and not cost as much. If you want better than 1080p gaming, then you want the Fury X.
People need to stop looking at this GPU for 1080p!!!

Although you are quite right that this is not the right card for people playing at 1080p, you do have to wonder at the business decision to aim the card at such a small niche market.
I mean, with AMD having a small and slowly dwindling market share, you aim your new flagship product at a tiny proportion of that already small market, and then to confound things they miss out on being able to capture those using 4K TVs instead of monitors.
It is a good card for the target audience, but I'm not sure AMD really thought things through.

Maybe the 1080p performance is beyond AMD's control and, as has been suggested, it is a trade-off from using HBM; we won't know until there is another chip with HBM, which of course will probably be next year with HBM2.
 
It's not aimed at higher resolutions; it's just a weaker GPU that happens to do well at 4K, probably due to the extra memory bandwidth HBM offers. If it was aimed at 4K it would have had more than 4GB for a start.

Depends how you look at it - within reason it should do well at 4K today - but you'd have to be the blindest of fanboys not to have concerns about how well it would do at 4K tomorrow. If your plan is to jump on the theoretical 8GB version down the line then it might not be a huge concern to you.
 