Will we see low-end GPUs with DDR4 memory instead of DDR3?

Funny you mention the Titan X. Have you noticed how a TX or 980 Ti @1080p will totally thrash a Fury X when both cards are @stock?

What is the difference? One uses slow-clocked HBM and the other fast-clocked GDDR5.

Nice way to divert from and ignore what I said. There's no point in carrying on the conversation with you.

And also a hint on the TX and 980: they are far less CPU-starved than the Fury X.
 
The problem has nothing to do with the cores but is entirely about the memory, because HBM sucks... yet a 290X with GDDR5 is more competitive with a GTX 980 at high resolution than at 1080p, and that card ALSO scales better at higher resolutions.

Here's a hint: you can load up shaders much more efficiently the more work you ask them to do. At lower resolutions the cores are utilised less efficiently, and this effect can be seen on many AMD cards, which have often provided more raw grunt (shader power) and less front-end power. Nvidia has often had more ROPs, or higher-clocked ROPs, which helps significantly when you aren't shader-limited, but they have also lost ground at higher resolutions. Architectures get balanced, and the balance can be placed wherever you like. AMD made a card with ridiculous bandwidth and loaded it up with shaders and fewer ROPs: a card aimed at higher resolutions.
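
To put rough numbers on that utilisation point, a minimal sketch: the pixel throughput and per-frame fixed cost below are made-up illustrative figures, not measurements from any real card, but they show how per-frame overhead looms larger when each frame is cheap.

```python
# Toy model (illustrative numbers only): at a fixed shading throughput, lower
# resolutions force many more frames per second, so fixed per-frame costs
# (front end, driver/CPU submission) eat a bigger share of each frame.

PIXELS_1080P = 1920 * 1080   # ~2.07 million pixels per frame
PIXELS_2160P = 3840 * 2160   # ~8.29 million pixels per frame (4x the work)

PIXEL_THROUGHPUT = 400e6     # assumed shader-limited pixels/second (purely illustrative)
FIXED_COST_MS = 2.0          # assumed per-frame front-end/CPU cost in milliseconds

for name, pixels in (("1080p", PIXELS_1080P), ("2160p", PIXELS_2160P)):
    shade_ms = pixels / PIXEL_THROUGHPUT * 1000   # time the shaders spend on one frame
    frame_ms = shade_ms + FIXED_COST_MS           # plus the fixed per-frame cost
    overhead = FIXED_COST_MS / frame_ms * 100
    print(f"{name}: {1000 / frame_ms:5.1f} fps, fixed cost = {overhead:4.1f}% of frame time")
```

With these made-up figures the fixed cost is roughly a quarter of each 1080p frame but under a tenth of each 2160p frame, which is the shape of the argument being made.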

The 290X was also designed with that in mind, and the 7970 vs GTX 680 had the same thing in mind: AMD aimed the card at higher resolutions with more bandwidth, and it was stronger than the GTX 680 the higher you went in resolution.

Epic fail

My 290X can give a Fury X a good run for its money @1080p. Admittedly the 290X is a better overclocker, but the point is that the card with the much bigger core is the one using HBM.
 
And what does the GPU use to talk to the CPU? The memory system, lol.

Wrong, the CPU talks to the GPU's logic controller, which then sends work to the cores or data to the memory.

But as I said before, it can all be tested once we have a DirectX 12 benchmark.
 
AMD have always been stronger at higher res and/or with lashings of AA. The fact that the Fury X catches up with the 980 Ti @2160p is simply a trait AMD GPUs have right across the board, and have done for a while; they don't come into their own until it gets tough.
It was the same with the HD 6###, HD 7### and Hawaii, and I can see it in action myself: the 970 is good at 1080p, but at 1440p you can almost feel it wincing (the pain, the pain, make it stop...) while the 290 just rolls on like an unstoppable freight train.

To take your motorway concept, think of it like this.

Nvidia are a GTI Hot Hatch, fast, nimble and efficient...

AMD are more like a Muscle Car, big, heavy and inefficient but very powerful.

The little Hot Hatch runs rings round the Muscle Car on a flat track; get to a hill and the Muscle Car, with its huge torque, gets up it like it's not even there, while the Hot Hatch gets bogged down and slows.
 
I think a reasonable amount of that humbug comes from the fact that AMD drivers have higher overhead, so at lower resolutions most of their cards are under-utilised and CPU-starved.

And, as everyone likes to state, at higher resolutions you are more GPU-bound, so most AMD cards have tended to shine at higher resolutions and settings because they can make use of their hardware.
 
Epic fail

My 290X can give a Fury X a good run for its money @1080p. Admittedly the 290X is a better overclocker, but the point is that the card with the much bigger core is the one using HBM.

Epic fail, only in your reading ability... again.

Re-read what I said rather than what you think I said. You can balance ANY card WHEREVER you want it. I didn't say anywhere that the 290X/Fury had the same design balance; they don't.

They have the SAME NUMBER OF ROPS, but they don't follow the same design path. You could likely take almost any given architecture, increase the ROP/shader ratio, improve top FPS and improve averages as a result, but lose minimum FPS and have lower high-res performance.


You've decided, based on specious reasoning, that the ONLY possibility is memory, while entirely ignoring the architectural differences because it suits your argument. You also compare it to Nvidia cards while claiming the ONLY difference is memory... which is categorically false. While you keep making false claims I'm going to ignore the silly conclusions based on them.
 
I think a reasonable amount of that humbug comes from the fact that AMD drivers have higher overhead, so at lower resolutions most of their cards are under-utilised and CPU-starved.

And, as everyone likes to state, at higher resolutions you are more GPU-bound, so most AMD cards have tended to shine at higher resolutions and settings because they can make use of their hardware.

Yes, I think some of it probably does, but the 970 isn't as comfortable with very high workloads as the 290.
 
I think a reasonable amount of that humbug comes from the fact that AMD drivers have higher overhead, so at lower resolutions most of their cards are under-utilised and CPU-starved.

And, as everyone likes to state, at higher resolutions you are more GPU-bound, so most AMD cards have tended to shine at higher resolutions and settings because they can make use of their hardware.

Total rubbish !!!

Most of the market is @1080p; I would like to think that AMD are not stupid and write their drivers to get the best out of their cards at that resolution.
 
Epic fail, only in your reading ability... again.

Re-read what I said rather than what you think I said. You can balance ANY card WHEREVER you want it. I didn't say anywhere that the 290X/Fury had the same design balance; they don't.

They have the SAME NUMBER OF ROPS, but they don't follow the same design path. You could likely take almost any given architecture, increase the ROP/shader ratio, improve top FPS and improve averages as a result, but lose minimum FPS and have lower high-res performance.


You've decided, based on specious reasoning, that the ONLY possibility is memory, while entirely ignoring the architectural differences because it suits your argument. You also compare it to Nvidia cards while claiming the ONLY difference is memory... which is categorically false. While you keep making false claims I'm going to ignore the silly conclusions based on them.

I would like to think that AMD balance their cards to work well @1080p; if they are not doing this they are ignoring most of the market.

Also, there is nothing wrong with my reading ability, as I notice from your posts before the arrival of HBM that you got an awful lot wrong. For example, the 1080p performance!!!!

I am waiting for HBM2 to see if 1080p performance increases with a clockspeed increase on the memory; I bet it does.
 
Total rubbish !!!
Most of the market is @1080p; I would like to think that AMD are not stupid and write their drivers to get the best out of their cards at that resolution.

You are starting to make yourself look daft now.

People are always bashing AMD about their driver overhead causing lower performance. Now you are just going off on the opposite tangent to make your own point look more valid. It is known that lower resolutions become more CPU-bound, as the GPU cores have less work to do and need to be fed faster to produce higher framerates.

And the reductions in driver overhead in the past few driver updates have shown increases in performance at 1080p on the majority of GCN cards, especially Hawaii-based cards.

I am waiting for HBM2 to see if 1080p performance increases with a clockspeed increase on the memory; I bet it does.

And when HBM2 drops, the devices using it will be running different core architectures anyway, so there is no direct comparison to tell whether it is HBM's frequency causing the issue, but it is unlikely.

The only way it could be compared is if they released a Fiji chip with HBM2, which would be unlikely. I believe from something I read that they are refreshing with 14nm FinFET parts and HBM2 across all product ranges.

But as I mentioned earlier, let's see how things go when the DX12 benchmarks roll in.
 
You are starting to make yourself look daft now.

People are always bashing AMD about their driver overhead causing lower performance. Now you are just going off on the opposite tangent to make your own point look more valid. It is known that lower resolutions become more CPU-bound, as the GPU cores have less work to do and need to be fed faster to produce higher framerates.

And the reductions in driver overhead in the past few driver updates have shown increases in performance at 1080p on the majority of GCN cards, especially Hawaii-based cards.



And when HBM2 drops, the devices using it will be running different core architectures anyway, so there is no direct comparison to tell whether it is HBM's frequency causing the issue, but it is unlikely.

The only way it could be compared is if they released a Fiji chip with HBM2, which would be unlikely. I believe from something I read that they are refreshing with 14nm FinFET parts and HBM2 across all product ranges.

But as I mentioned earlier, let's see how things go when the DX12 benchmarks roll in.

I am not making myself look daft at all; every Fury X review will show that the performance is bad @1080p. This is a fact that is not going to go away.

I also think you are clutching at straws hoping that DX12 will come to the rescue. Remember, the Titan X is not only the fastest Nvidia card, it also has a higher level of DX12 compatibility than anything AMD are producing at the moment.

Almost forgot: have you or DM even used a Fiji-based card?
 
I am not making myself look daft at all; every Fury X review will show that the performance is bad @1080p. This is a fact that is not going to go away.

I also think you are clutching at straws hoping that DX12 will come to the rescue. Remember, the Titan X is not only the fastest Nvidia card, it also has a higher level of DX12 compatibility than anything AMD are producing at the moment.

Almost forgot: have you or DM even used a Fiji-based card?

How did this turn into "Nvidia are better at something than AMD", yet again? Every single thread in this sub-forum...
 
How did this turn into "Nvidia are better at something than AMD", yet again? Every single thread in this sub-forum...

Right at the point where people would not accept that the Fiji cards' 1080p performance is below par.

What is interesting is what you said about the GTX 970 being poor at higher resolutions; this is something I said when the GTX 970 and 980 first came out. I also said it was to do with their memory/bus setup, which got most of the Nvidia guys upset, as they thought it would be fixed with newer drivers, something I said was never going to happen as the drivers work the same at all resolutions. Many months after the launch of these two cards they still have the problem.

Along similar lines, even though things are reversed, I don't think much will change for the Fiji cards: they will remain very good at high resolution and poor @1080p, as again I think it is a hardware problem with the memory/bus, just the other way round from the 970/980.
 
Right at the point where people would not accept that the Fiji cards' 1080p performance is below par.

What is interesting is what you said about the GTX 970 being poor at higher resolutions; this is something I said when the GTX 970 and 980 first came out. I also said it was to do with their memory/bus setup, which got most of the Nvidia guys upset, as they thought it would be fixed with newer drivers, something I said was never going to happen as the drivers work the same at all resolutions. Many months after the launch of these two cards they still have the problem.

Along similar lines, even though things are reversed, I don't think much will change for the Fiji cards: they will remain very good at high resolution and poor @1080p, as again I think it is a hardware problem with the memory/bus, just the other way round from the 970/980.

By that argument, given that Fiji performs much better at 2160p relative to 1080p, there is nothing wrong with its memory. :)
 
By that argument, given that Fiji performs much better at 2160p relative to 1080p, there is nothing wrong with its memory. :)

It is the way the memory is set up that is the problem for both the GTX 970/980 and the Fury X.

1080p works best with high clockspeeds, as there are lots of frames per second with not much information in each, whereas 2160p is better with a wide bus, where the fps are a lot lower but there is 4x the information in each frame.

If you use a GM200 card it is a good compromise, as it has the high memory clocks for 1080p and also a reasonably wide 384-bit bus for 2160p. Having said that, it still does not perform as well at high resolution as the 512-bit bus on a 290X or the HBM on a Fury X.
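
As a rough reference for the clocks-versus-width trade-off, a small sketch computing theoretical peak bandwidth from the commonly quoted stock bus widths and data rates (theoretical peaks only, not real-world throughput):

```python
# Rough theoretical peak bandwidth = (bus width in bits / 8) * effective data rate.
# Figures below are the commonly quoted stock specs for each card.

def peak_bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_bits / 8 * data_rate_gbps

cards = [
    ("GTX 980 (256-bit GDDR5 @ 7 Gbps)", 256, 7.0),
    ("980 Ti / Titan X (384-bit GDDR5 @ 7 Gbps)", 384, 7.0),
    ("R9 290X (512-bit GDDR5 @ 5 Gbps)", 512, 5.0),
    ("Fury X (4096-bit HBM @ 1 Gbps)", 4096, 1.0),
]

for name, bits, rate in cards:
    print(f"{name}: {peak_bandwidth_gbs(bits, rate):.0f} GB/s")
```

That works out at roughly 224, 336, 320 and 512 GB/s respectively, which is why a very wide, low-clocked HBM interface can still sit at the top of the list on total bandwidth.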
 
It is the way the memory is set up that is the problem for both the GTX 970/980 and the Fury X.

1080p works best with high clockspeeds, as there are lots of frames per second with not much information in each, whereas 2160p is better with a wide bus, where the fps are a lot lower but there is 4x the information in each frame.

If you use a GM200 card it is a good compromise, as it has the high memory clocks for 1080p and also a reasonably wide 384-bit bus for 2160p. Having said that, it still does not perform as well at high resolution as the 512-bit bus on a 290X or the HBM on a Fury X.

Does increasing Fury's memory clock make much difference?

Overclock it by 10% (550MHz): what's the gain?
 
The memory system does not work that way, kaap. You either go wide with a lower frequency or narrow with a high frequency. The internal frequency of the GDDR5 memory is far lower than the interface frequency that is shown as the RAM's frequency.

All that happens is that more data is compacted into one line cycle from multiple columns in a GDDR5 chip and sent to the GPU, whereas with HBM the memory is read directly from different columns in a chip over multiple lines. You still send the same amount of information in the end.

The above with GDDR5 is no different to how DDR system memory works.
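
A small sketch of that wide-and-slow versus narrow-and-fast point at the device level, using typical published figures purely for illustration (the chip/stack counts and per-pin rates are representative assumptions, not measurements from a specific board):

```python
# Per-device throughput = (interface width in bits / 8) * per-pin data rate.
# A GDDR5 chip is narrow (32-bit) but fast per pin; a first-generation HBM
# stack is very wide (1024-bit) but slow per pin. Multiply by the device count
# and the card-level totals land in the same ballpark.

def device_gbs(interface_bits: int, per_pin_gbps: float) -> float:
    return interface_bits / 8 * per_pin_gbps

gddr5_chip = device_gbs(32, 7.0)     # ~28 GB/s per 7 Gbps GDDR5 chip
hbm_stack = device_gbs(1024, 1.0)    # ~128 GB/s per HBM1 stack

print(f"12 x GDDR5 chips: {12 * gddr5_chip:.0f} GB/s (a 384-bit card)")
print(f"4 x HBM stacks:   {4 * hbm_stack:.0f} GB/s (Fiji)")

# GDDR5 uses an 8n prefetch, so the memory array itself runs at roughly the
# per-pin data rate divided by 8 -- far below the headline "memory frequency".
print(f"GDDR5 array clock ~ {7.0 / 8 * 1000:.0f} MHz for a 7 Gbps part (approximate)")
```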

Increasing the memory clock alone had a tiny effect when people benchmarked games, even though at 20% the overclock is relatively large.

And waiting on DX12 is not clutching at straws, since the Fury X has no problem matching the Titan X in Thief when using Mantle, even with its Mantle support being flaky, while the Fury X was around 20-40% slower in DirectX 11 in the same game.

And I am arguing that HBM is not the problem, unlike what kaap seems to think; my points have not been AMD vs Nvidia.
 
Does increasing Fury's memory clock make much difference?

Overclock it by 10% (550MHz): what's the gain?

I don't know the exact gain, as I have not tried it myself, but there is a gain. A few people on the bench threads have posted scores using a memory overclock.

The other drawback is that there is not much extra you can get; going to 550MHz seems about the norm.

If you have a go on your GTX 970 it is a bit different, as you can go from 1752MHz to over 2100MHz on the memory if you have a good one. If you try it, though, don't expect a massive increase in performance.
 
The memory system does not work that way, kaap. You either go wide with a lower frequency or narrow with a high frequency. The internal frequency of the GDDR5 memory is far lower than the interface frequency that is shown as the RAM's frequency.

All that happens is that more data is compacted into one line cycle from multiple columns in a GDDR5 chip and sent to the GPU, whereas with HBM the memory is read directly from different columns in a chip over multiple lines. You still send the same amount of information in the end.

The above with GDDR5 is no different to how DDR system memory works.

Increasing the memory clock alone had a tiny effect when people benchmarked games, even though at 20% the overclock is relatively large.

And waiting on DX12 is not clutching at straws, since the Fury X has no problem matching the Titan X in Thief when using Mantle, even with its Mantle support being flaky, while the Fury X was around 20-40% slower in DirectX 11 in the same game.

And I am arguing that HBM is not the problem, unlike what kaap seems to think; my points have not been AMD vs Nvidia.

Remember, graphics cards produce one frame at a time and then move on to the next. What that means is that each frame produced at 1080p on a Fury X will not come close to using all the bus width available, as there is simply not enough data in each frame to do it.
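
For a rough sense of scale on "data in each frame", a sketch counting only the final 32-bit colour buffer; this hugely understates real per-frame traffic (textures, geometry, overdraw, intermediate render targets), and the frame rates used are made up for illustration:

```python
# Final colour buffer size per frame vs theoretical peak bandwidth.
# Real per-frame memory traffic is many times larger than the colour buffer
# alone, so these numbers only illustrate how frame size scales with resolution.

BYTES_PER_PIXEL = 4          # 32-bit colour
PEAK_BANDWIDTH_GBS = 512     # Fury X theoretical peak

for name, width, height, fps in (("1080p", 1920, 1080, 100), ("2160p", 3840, 2160, 40)):
    frame_mb = width * height * BYTES_PER_PIXEL / 1e6     # MB per colour buffer
    gb_per_second = frame_mb * fps / 1000                  # GB/s of colour-buffer writes
    share = gb_per_second / PEAK_BANDWIDTH_GBS * 100
    print(f"{name} @ {fps} fps: {frame_mb:.1f} MB per frame, "
          f"{gb_per_second:.2f} GB/s = {share:.2f}% of peak")
```

A 2160p frame carries 4x the colour-buffer data of a 1080p frame, which is the "more information per frame" point, even though the colour buffer alone is a small slice of total bandwidth at either resolution.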

As for the Thief bench, I think you have just dug a hole for yourself. :D

Putting aside multi-GPU scores, as those have more to do with CPU clockspeed and whoever has the fastest one tends to win, that leaves us with single GPUs. Perhaps you should check out who has the highest single-GPU score @1080p and 2160p; I will give you a clue, I was not using Mantle when I did them. :D

I did do some Fury X runs on Thief myself but never posted them as, IIRC, they were a bit embarrassed by the 290X scores already on there. Hopefully with the latest drivers this should show some improvement.
 