Possible Radeon 390X / 390 and 380X Spec / Benchmark (do not hotlink images!)

That's the understanding most people have, and what I'm also assuming. But some say that the increased bandwidth of HBM means it can somehow get by with less capacity than a GDDR5 equivalent. The technicalities are way beyond me, though.

In simple terms:
GDDR5 needs to load textures and also copies information it doesn't immediately need in order to fill up the RAM. HBM removes that copy limitation, as it allows faster loading and unloading.
If 4GB isn't enough, then it invalidates the 970's 3.5GB and the 980 too; no card will play games then...
None unless you have a 6GB card.
Unrealistic is its first name and Silly is its last name.
 
Looking again, it's rubbish.

All it's comparing is how fast it can receive and then chuck out data. It doesn't in any way claim that HBM will hit a VRAM wall later than the same amount of GDDR5 VRAM would.
The only thing that occurs to me is that with GDDR5 perhaps the driver actually duplicates some items in RAM to improve bandwidth and latency, similar to RAID 1. I have no idea whether this is the case or not of course :)
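Purely to illustrate the trade-off that speculation implies, here is a hypothetical sketch (every number invented, and no claim that any driver actually does this): mirroring data for bandwidth, RAID 1 style, would cost capacity, which could make a GDDR5 card look hungrier for VRAM than an HBM card holding the same data.

Code:
#include <cstdio>

int main() {
    // Invented numbers, only to show the capacity cost of mirroring.
    const double physicalGB = 4.0;  // physical VRAM on the card
    const double mirroredGB = 1.0;  // data the driver hypothetically duplicates
    // Each mirrored gigabyte occupies two gigabytes of physical memory,
    // so less capacity is left over for unique data.
    const double uniqueGB = physicalGB - mirroredGB;
    printf("Physical VRAM: %.1f GB\n", physicalGB);
    printf("Distinct data it can hold while mirroring %.1f GB: %.1f GB\n",
           mirroredGB, uniqueGB);
    return 0;
}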
 
"If 4GB isn't enough, then it invalidates the 970's 3.5GB and the 980 too; no card will play games then..."

Neither of those cards is targeted at top-end users who run 3-4 cards at 4K resolution and expect 200fps. The GTX 980 is, and always has been, an upper-mid-range card; it's just that AMD have been so inactive that Nvidia were in a position to charge top-end prices for it.
 
Everything that you see passes through the buffer.

The buffer holds texture preloads: as you turn a corner, the objects are already there and visible, because the GPU has already rendered them and stored them in the buffer.

The bigger the buffer, the more of this information it can preload and store. If the buffer isn't big enough to hold the information for every possible way you could turn, it needs to flush the buffer to make room for the image coming into view. That causes latency, which manifests itself as a slowdown, lower FPS or, in some cases, a dramatic slowdown that you would see as a stutter.

The faster the memory architecture, the faster it can flush and preload, which results in higher FPS and less stuttering.
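To make the flush-and-preload behaviour concrete, here is a toy model in C++, with invented texture names and sizes: preloads fill the buffer until something no longer fits, at which point the least-recently-used texture is flushed, and that flush is exactly the stall described above.

Code:
#include <cstddef>
#include <cstdio>
#include <list>
#include <string>
#include <unordered_map>

// A toy VRAM buffer that preloads textures and flushes the least-recently-used
// one when full. Real drivers and engines are far more sophisticated.
class TextureBuffer {
    size_t capacityMB, usedMB = 0;
    std::list<std::pair<std::string, size_t>> lru;  // front = most recently used
    std::unordered_map<std::string,
                       std::list<std::pair<std::string, size_t>>::iterator> index;
public:
    explicit TextureBuffer(size_t capMB) : capacityMB(capMB) {}

    void preload(const std::string& name, size_t sizeMB) {
        auto hit = index.find(name);
        if (hit != index.end()) {                   // already resident: just touch it
            lru.splice(lru.begin(), lru, hit->second);
            return;
        }
        while (usedMB + sizeMB > capacityMB && !lru.empty()) {
            // Not enough room: flush the least-recently-used texture.
            // This is the latency that shows up as a slowdown or stutter.
            printf("flushing %s (%zu MB)\n",
                   lru.back().first.c_str(), lru.back().second);
            usedMB -= lru.back().second;
            index.erase(lru.back().first);
            lru.pop_back();
        }
        lru.emplace_front(name, sizeMB);
        index[name] = lru.begin();
        usedMB += sizeMB;
    }
};

int main() {
    TextureBuffer vram(4096);                 // a 4 GB card, sizes made up
    vram.preload("street_ahead", 1500);
    vram.preload("around_corner", 1500);
    vram.preload("building_interior", 1500);  // exceeds 4 GB: forces a flush
    return 0;
}

On an 8GB buffer the same three preloads would fit without a flush, which is the whole appeal of a bigger buffer.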
 
So in the case of HBM at 4GB you'd expect to see more frequent flushes, but as the bandwidth is higher and latency is at a minimum due to the memory's proximity to the GPU core, you should see a smaller negative impact on performance?

If that is true, any game demanding less than 4GB should get a decent boost, and anything using 4GB won't bottleneck so much? Then if you crank it to 8GB you should effectively be ahead again of anything requiring less than 8GB?
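Here is a back-of-envelope comparison of what a single flush-and-refill event might cost at the two bandwidths being discussed. The HBM figure is from the rumours in this thread and the GDDR5 figure is roughly 290X class, both assumptions; real refills also involve PCIe transfers from system RAM, which this ignores entirely.

Code:
#include <cstdio>

int main() {
    const double flushedGB = 1.0;    // data flushed and re-fetched per event (invented)
    const double hbmGBs    = 512.0;  // rumoured HBM bandwidth, GB/s
    const double gddr5GBs  = 320.0;  // 290X-class GDDR5 bandwidth, GB/s
    // Time to refill the flushed data from local memory, in milliseconds.
    printf("refill over HBM:   %.2f ms\n", flushedGB / hbmGBs * 1000.0);
    printf("refill over GDDR5: %.2f ms\n", flushedGB / gddr5GBs * 1000.0);
    return 0;
}

So each flush would be cheaper on HBM, even if flushes happen more often, which is the trade-off being suggested.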
 
Yes. How much the faster HBM architecture improves preloading and flushing is an unknown, but certainly having 4GB of HBM will result in smoother, and possibly higher, performance than 4GB of GDDR5 (where 4GB is not enough to store everything the engine would like or needs).

GPU buffers are a very dynamic thing: the more you have, the more the engine will use. That doesn't necessarily mean it needs to; if you have a lot, it may store the whole map and never flush. Having 4GB vs 8GB doesn't necessarily mean the smaller card will be flushing all the time; it may not do that at all. It's simply that if more is available, the engine gets greedy.
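A short sketch of that greedy behaviour, again with invented numbers: both cards receive the same stream of assets, the engine keeps everything resident while there is room, and only the smaller card is ever forced to evict.

Code:
#include <cstddef>
#include <cstdio>
#include <vector>

// Opportunistic caching: keep every asset resident until an allocation would
// fail, then evict the oldest. More VRAM means more gets cached, not that the
// extra capacity was ever strictly needed.
struct Vram {
    size_t capacityMB, usedMB = 0;
    std::vector<size_t> resident;  // sizes of cached assets, oldest first

    bool allocate(size_t sizeMB) {
        while (usedMB + sizeMB > capacityMB) {   // under pressure: evict oldest
            if (resident.empty()) return false;  // asset larger than the card
            usedMB -= resident.front();
            resident.erase(resident.begin());
            printf("evicting oldest asset to make room\n");
        }
        resident.push_back(sizeMB);
        usedMB += sizeMB;
        return true;
    }
};

int main() {
    Vram card8{8192};  // 8 GB card: this workload fits, so it never evicts
    Vram card4{4096};  // 4 GB card: same workload, evicts under pressure
    for (size_t asset : {1024u, 1024u, 1024u, 1024u, 1024u}) {
        card8.allocate(asset);
        card4.allocate(asset);
    }
    printf("8 GB card caching %zu MB, 4 GB card caching %zu MB\n",
           card8.usedMB, card4.usedMB);
    return 0;
}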
 
So how big a deal is this Direct3D feature-level 12.1 thing...?

NVIDIA Could Capitalize on AMD Graphics CoreNext Not Supporting Direct3D 12_1

That's another massive load of FUD by btarunr on TechPowerUp.

As one of the posters on TPU pointed out, the truth is rather different:

[image: chart comparing DX12 feature support on NVIDIA and GCN hardware]

NVIDIA can only emulate some DX12 features, whereas GCN supports them natively... others it can't do at all.
 
humbug, do you think HBM is the sort of thing that developers will be able to code for to utilise it better, or do you reckon it will be down to the driver development teams?

Could this lead to a situation where the older tech is no longer optimised for, just like the move from VLIW4 to GCN, and Nvidia's architectures prior to Fermi? I suppose it's similar to the way tessellation is being used a lot more, and the older tech that doesn't tessellate quite so well is lagging behind.
 
"That's another massive load of FUD by btarunr on TechPowerUp... NVIDIA can only emulate some DX12 features, whereas GCN supports them natively... others it can't do at all."

As you say, there is a lot of FUD, not only on TechPowerUp but on here as well.

Please do not post up charts like that claiming they are the truth.

Here is another DirectX 12 chart which shows completely different results.

[image: DX12 feature-support chart showing different results]


Source

We had this discussion about 100 pages ago, with both me and humbug posting up charts that differed. We came to the conclusion then that these charts cannot be believed, and we should wait till DirectX 12 is actually here and then we can see what is what.
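Once DirectX 12 hardware and drivers are actually out, the chart wars can be settled by querying the device itself rather than trusting slides. A minimal sketch, assuming Windows 10 with the D3D12 SDK headers and an MSVC toolchain; the first two values printed are essentially what separates feature level 12_1 from 12_0.

Code:
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // Create a device on the default adapter; fails without a DX12 driver.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device), (void**)&device))) {
        printf("no D3D12-capable device/driver found\n");
        return 1;
    }
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));
    // Conservative rasterization and ROVs are the headline 12_1 additions.
    printf("conservative rasterization tier: %d\n", (int)opts.ConservativeRasterizationTier);
    printf("rasterizer-ordered views:        %s\n", opts.ROVsSupported ? "yes" : "no");
    printf("resource binding tier:           %d\n", (int)opts.ResourceBindingTier);
    printf("tiled resources tier:            %d\n", (int)opts.TiledResourcesTier);
    device->Release();
    return 0;
}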
 
"That's another massive load of FUD by btarunr on TechPowerUp... NVIDIA can only emulate some DX12 features, whereas GCN supports them natively... others it can't do at all."

You claim people are posting FUD, then do so yourself :rolleyes: The chart you've linked has been picked to pieces by people who actually know what they're talking about, and I'm sure this has been pointed out to you before.

As usual people see what they want to see and let bias get in the way.

Arguing over DX12 capabilities at the moment is about as pointless as anything.
 
"As you say, there is a lot of FUD, not only on TechPowerUp but on here as well... We came to the conclusion then that these charts cannot be believed, and we should wait till DirectX 12 is actually here and then we can see what is what."

GCN *definitely* does conservative rasterisation, so I trust the chart I posted more than yours.
 