
AMD Working On An Entire Range of HBM GPUs To Follow Fiji And Fury Lineup – Has Priority To HBM2 Cap

How much bandwidth does a Fury X have compared to the next best card, lol?

If that amount of bandwidth is not doing the job at 1080p by a wide margin, then it doesn't take much working out where the problem is!

Clockspeed

Yet the Mantle comparison, which you keep ignoring, shows it is fine. It's not a matter of the GM200 not being able to use Mantle; what matters is that the Fury X beats its own DX11 review results by 25-30 fps at 1080p when running Mantle.
 
500 MHz (1 Gbps effective) 4096-bit HBM has pretty much the same bandwidth as 8 GHz effective (8 Gbps) 512-bit GDDR5.
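For anyone who wants to sanity-check that, the arithmetic is just bus width times per-pin rate; the 1 Gbps and 8 Gbps per-pin figures below are the usual quoted ones, so treat this as a rough sketch rather than spec numbers:

```python
# Rough bandwidth comparison: wide-and-slow HBM vs narrow-and-fast GDDR5.
# Per-pin rates are the commonly quoted figures, used purely for illustration.

def bandwidth_gbs(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8."""
    return bus_width_bits * per_pin_gbps / 8

hbm = bandwidth_gbs(4096, 1.0)    # 500 MHz clock, double data rate -> 1 Gbps per pin
gddr5 = bandwidth_gbs(512, 8.0)   # "8 GHz effective" = 8 Gbps per pin

print(f"HBM   (4096-bit @ 1 Gbps): {hbm:.0f} GB/s")    # 512 GB/s
print(f"GDDR5 (512-bit  @ 8 Gbps): {gddr5:.0f} GB/s")  # 512 GB/s
```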

CPU performance makes more difference at lower resolutions; the Mantle 1080p results can easily be explained by the fact that Mantle is easier on the CPU than DX11.
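A toy model of that argument, with completely made-up frame times just to show the shape of it: the frame rate is set by whichever of the CPU or GPU takes longer per frame, so a lighter API only shows up where the CPU is the longer pole.

```python
# Toy bottleneck model with invented numbers: frame rate is limited by
# whichever of the CPU (driver/API) or GPU takes longer per frame.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_1080p, gpu_4k = 8.0, 22.0      # hypothetical GPU frame times (ms)
cpu_dx11, cpu_mantle = 12.0, 7.0   # hypothetical CPU frame times (ms)

print(f"1080p DX11:   {fps(cpu_dx11, gpu_1080p):.0f} fps")    # CPU-bound -> ~83 fps
print(f"1080p Mantle: {fps(cpu_mantle, gpu_1080p):.0f} fps")  # GPU-bound -> ~125 fps
print(f"4K DX11:      {fps(cpu_dx11, gpu_4k):.0f} fps")       # GPU-bound -> ~45 fps
print(f"4K Mantle:    {fps(cpu_mantle, gpu_4k):.0f} fps")     # GPU-bound -> ~45 fps
```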
 
The thing is, he felt he got an increase by overclocking the memory, or he would not have bothered. :)

I don't doubt he would see an increase if the memory clock is operating at a higher frequency. Did he post any runs with the same core clock but a lower memory overclock? Then we could really see the contribution. If latency really is an influential factor limiting performance at lower resolutions, it should scale quite well.
 
The 15.7 drivers don't really bring that much new for Fury X over 15.15; they bring more to older cards. For Fury X their biggest difference is on Win10 (WDDM 2.0).

Which clearly suggests Fiji's 1080p issue is not driver related.

The BF4 Mantle and Crossfire data all point to that as well.
 
http://www.eurogamer.net/articles/digitalfoundry-2015-amd-radeon-r9-fury-review
There have been theories that AMD's slower DX11 driver may be to blame for the poor showings at lower resolutions. However, if that were the case, we would expect Fury and Fury X to perform at the same level at 1080p and 1440p as CPU would become the bottleneck rather than the GPU hardware - this does not happen: the top-tier card is still faster and once again, overclocking brings us pretty close to overall parity.

As I have pointed out several times, drivers/DX11 are not the cause of the issue.
 
Think of the "poo" storm if Nvidia "Has Priority To HBM2" is all I've got to say.

AMD helped develop the technology; apparently one of their engineers was working on the stacking side of it for multiple years. So it stands to reason they get first dibs on iterations of it.
 
Remember there's no benefit in the R9 390X having 8GB of memory unless you Crossfire it.

Sorry if I'm covering something that is "ancient" news, but I'm not sure where the basis of this comes from.

I have a 3440x1440 Dell 34" widescreen monitor, so high-res usage is very important currently - especially with the more budget cards if possible, as I can't afford anything like a 980 Ti.

The 390 appears to be a good compromise (non-X preferably, but from what you're hinting at, the X and non-X shouldn't make a difference).
 

You still won't be able to run settings that really need more than 4GB on a single 390X; turning the settings up to max at 3440x1440 will be a slideshow regardless of whether it's the 8GB card or a similarly clocked 290X with 4GB.
 
Which clearly suggests Fiji's 1080p issue is not driver related.

The BF4 Mantle and Crossfire data all point to that as well.

BF4 doesn't have a GCN 1.2 path at all in Mantle.

Also, how does the driver version prove that it's not a driver issue? Fiji didn't get a 30% draw-call boost from 15.15 to 15.7, while cards that came from 15.6 (or earlier) did get that boost, and they gained very significant improvements in CPU-heavy games.
 
We are still talking about data that is being sent at near the speed of light.

*raises finger* Actually, electrons flowing through silicon have a mobility of about 1400 cm²/(V·s), and the electric field pushing them around only propagates at a fraction of c.
 

Yes, electrons move very slowly in comparison to the energy transferred down a wire. But the signal/energy in a wire propagates much faster than the movement of the electrons, at 50-99% of the speed of light depending on the materials.

But I was also talking about the latency of the signal between the memory chips and the memory controller. :P
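If anyone wants the rough orders of magnitude here, a quick sketch; only the mobility number comes from the post above, while the field strength, trace length and the 50%-of-c factor are just assumed illustrative values:

```python
# Electron drift velocity vs. signal propagation, order-of-magnitude only.
# Mobility is the ~1400 cm^2/(V*s) quoted above; the field strength, trace
# length and 0.5c propagation factor are assumptions for illustration.

C = 3.0e8                       # speed of light, m/s

mobility = 1400e-4              # 1400 cm^2/(V*s) converted to m^2/(V*s)
field = 1e4                     # assumed electric field, V/m
drift_velocity = mobility * field           # v = mu * E

trace_length = 0.005            # assumed ~5 mm interposer trace, m
signal_velocity = 0.5 * C       # assume signal travels at ~50% of c
signal_delay = trace_length / signal_velocity

print(f"Electron drift velocity: {drift_velocity:.0f} m/s")        # ~1400 m/s
print(f"Signal velocity:         {signal_velocity:.2e} m/s")       # ~1.5e8 m/s
print(f"Signal delay over 5 mm:  {signal_delay * 1e12:.0f} ps")    # ~33 ps
```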
 
You still won't be able to run settings that really need more than 4GB on a single 390X; turning the settings up to max at 3440x1440 will be a slideshow regardless of whether it's the 8GB card or a similarly clocked 290X with 4GB.

Well, I get better performance from the 390 than I did from the 970 I took out...
 
In addition to getting all the supply, it seems AMD have patents:

https://www.google.com/patents/WO2014025676A1?cl=en

Cough up Nvidia? :D

AMD doesn't get all the supply; AMD would get priority, if you believe AMD's marketing. As it is, Hynix are the ones who currently sell HBM chips; if Hynix get their production rates up they will be supplying both AMD and Nvidia with the required chips and maximising their profits. Nvidia won't let Pascal go beyond the design stage without a contract in place with Hynix that guarantees supply. And that assumes that by then only Hynix is producing HBM, when anyone can. Nvidia might simply persuade someone else to produce their memory, e.g. Samsung.


HBM is a JEDEC standard so anyone can manufacture, sell and use HBM without licensing AFAIK.

Of course AMD have patents; they would be stupid not to. The patents protect against someone copying the design, making minor adjustments, and then selling new chips without paying licensing costs. Nvidia aren't going to be ripping off the designs, just utilizing the chips.
 
Yep, HBM is an open standard.
https://www.jedec.org/standards-documents/docs/jesd235


Also, a few months ago Nvidia announced they would move from HMC to HBM; that would be because they had a supply contract in place with Hynix. Although it is interesting that HMC is progressing as well, and some Intel products might appear soon with HMC. So clearly Nvidia had a choice of memory tech.
 

HMC is currently more expensive because it has a more complex interface in comparison to HBM.

HBM uses a DDR parallel interface which directly connects the memory to the memory controller.

HMC uses a memory controller on either end and a serial link in between to reduce the number of required traces, using a system known as SerDes. It allows the memory to be placed off-package at a similar distance to current RAM slots.

But I think HBM has a similar option, since the base layer can be substituted with a memory controller. So it will be interesting to see if they make HBM system memory in the future, although it would require the CPU to support it, etc.
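Rough illustration of the wide-and-slow vs narrow-and-fast trade-off between the two interfaces; the per-pin and per-lane rates here are ballpark assumptions, not spec figures:

```python
# Data pins/lanes needed for the same peak bandwidth:
# HBM-style wide parallel DDR interface vs HMC-style SerDes lanes.
# Per-pin and per-lane rates are ballpark assumptions for illustration.

TARGET_GBS = 128  # GB/s, roughly one first-gen HBM stack

def pins_needed(target_gbs: float, rate_gbps_per_pin: float) -> int:
    """Data pins/lanes needed to hit target GB/s at a given per-pin rate."""
    return round(target_gbs * 8 / rate_gbps_per_pin)

print("HBM-style (1 Gbps/pin):  ", pins_needed(TARGET_GBS, 1.0), "data pins")  # 1024
print("HMC-style (15 Gbps/lane):", pins_needed(TARGET_GBS, 15.0), "lanes")     # ~68
```

The SerDes blocks on each end are what the "more complex interface" cost refers to: far fewer traces, but extra logic and power at both the host and the memory.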
 
All this talk of how Nvidia won't let it go beyond the design stage, how they have a choice of HBM or HMC, how AMD are getting all the stock of HBM, etc.
If Pascal did indeed tape out last month then all these factors have already been sorted, because you don't go ahead spending millions on research and development, design, and prototype manufacture if you haven't decided on what memory type you're using and where you're getting it from. Apart from the fact that HBM and HMC would require different components on the chip itself, and the design is obviously pretty much final as it supposedly taped out last month. :)
 

That is exactly my point. Nvidia taped out Pascal last month; a long time before that, Nvidia would have made contracts with a supplier for its memory technology. Nvidia have clearly announced they will use HBM, so they have a contract with Hynix (or someone else) to supply adequate chips.
 