
Will we see low-end GPUs with DDR4 memory instead of DDR3?

Low-end GPUs use DDR3 memory, while higher-priced GPUs use GDDR5 and are starting to use HBM/HBM2. Will we see DDR4 memory being used on low-end GPUs within the next couple of years or so?
 
I wouldn't have thought so; by the time HBM is the norm on the mid-to-high end, the low end will likely just get GDDR5, with GDDR3 becoming completely obsolete.
 

GDDR5 is more expensive and uses more power; they won't be using that on their cheapest GPUs.
 
Short answer, no, longer answer... nope. :p

Reality is, the single area where a low-end discrete GPU wins hands down over an iGPU is bandwidth. AMD's and Intel's fastest APUs are dual-channel only and offer about 40GB/s absolutely maxed out; most are more in the 25-30GB/s range with non-overclocked memory. AMD's lowest-end 28nm GPU offers 72GB/s, and that is in addition to (not taking anything away from) the CPU, whereas the APU's 25-30GB/s is shared with CPU needs.
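
The bandwidth gap described above is easy to sanity-check with back-of-envelope arithmetic. Here is a quick sketch (the memory specs chosen are illustrative assumptions, not measured figures from any particular chip):

```python
# Peak bandwidth = bus width (bytes) * effective transfer rate.
# These are theoretical ceilings; real-world throughput is lower.

def peak_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: float) -> float:
    """Theoretical peak bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return bus_width_bits / 8 * transfer_rate_mts * 1e6 / 1e9

# Dual-channel DDR3-1600 APU: two 64-bit channels at 1600 MT/s,
# and this pool is shared with the CPU.
apu = peak_bandwidth_gbs(128, 1600)       # ~25.6 GB/s

# Dual-channel DDR3-2400 (heavily overclocked memory), near the ~40 GB/s ceiling.
apu_oc = peak_bandwidth_gbs(128, 2400)    # ~38.4 GB/s

# Low-end discrete card: 128-bit GDDR5 bus at 4500 MT/s, dedicated to the GPU.
discrete = peak_bandwidth_gbs(128, 4500)  # ~72 GB/s
```

Even an overclocked dual-channel APU stays well short of the dedicated bandwidth of a bottom-tier discrete card.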

HBM entirely destroys this by offering hugely higher bandwidth; even at its lowest speed/capacity (one stack) it offers almost double what a low-end discrete GPU can, and with HBM2 this will be 256GB/s instead.
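
The per-stack figures follow from HBM's wide-and-slow design: a 1024-bit bus running double data rate at a modest clock. A minimal sketch, assuming the nominal first-generation HBM clock of 500MHz and an HBM2 clock of 1000MHz:

```python
def hbm_stack_bandwidth_gbs(clock_mhz: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of one HBM stack: 1024-bit interface, double data rate."""
    transfers_per_sec = clock_mhz * 1e6 * 2   # DDR: two transfers per clock
    return bus_width_bits / 8 * transfers_per_sec / 1e9

hbm1 = hbm_stack_bandwidth_gbs(500)    # 128 GB/s per first-gen stack
hbm2 = hbm_stack_bandwidth_gbs(1000)   # 256 GB/s per HBM2 stack
```

So a single HBM1 stack's 128GB/s already nearly doubles the ~72GB/s of a low-end discrete card, and one HBM2 stack doubles it again.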

Another factor is architecture. Current architectures use a GDDR5/GDDR3 memory controller because it's effectively all the same technology, which is also why they can use DDR3. They can interchange the memory used, though DDR3 has basically stopped being used (it still could be) because it's too low-bandwidth even for low-end GPUs now. With an HBM memory controller that compatibility simply isn't there. AMD/Nvidia will be designing architectures top to bottom to use HBM, and while we might see GDDR5 memory controllers included in some designs for a while (due to capacity/production/other reasons), in a couple of years it will be a lot of work and cost for almost no reward or reason. So while, let's say, GCN 1.2 can use an HBM or GDDR5 memory controller, GCN 2.0 may not work with GDDR5 at all, and let's say GCN 3 won't at all. Pascal might have some GDDR5 designs, but likely whatever comes after Pascal won't.

I would think that two years from now HBM will be available on pretty much any CPU/APU from AMD (it's a little unclear whether AMD will offer a CPU-only Zen beyond the first iteration, and whether that CPU would still have HBM), plus all of their GPU range. Intel will likely start packaging either HMC or potentially HBM onto their APUs, making low-end GPU sales less viable in Intel systems as well.

In the next 18-36 months, APUs having actually sensible amounts of bandwidth for a GPU will almost certainly finish off the low-end GPU segment for good. It has shrunk an enormous amount in the past 3-4 years, but this will signal the absolute end for it.
 

Shame HBM absolutely cripples performance @1080p, which is what most low-end GPUs run at, lol.

HBM is also very unreliable; so far I have had 2 dead cards because of it.
 

HBM doesn't cripple 1080p performance...

And from what I have seen, all of AMD's 14nm cards may use HBM.
 
Read any Fury X review and it is quite obvious the 1080p performance is not there.

512GB/s of memory bandwidth is 512GB/s however it's configured; that is the performance of it.

I don't know what's holding it back; it could be a lack of ROPs.
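
For reference, the Fury X's headline 512GB/s is simply four first-generation HBM stacks in aggregate. A quick check, assuming the published configuration of a 1024-bit bus per stack at 500MHz double data rate:

```python
# Fury X memory subsystem: four HBM1 stacks on one interposer.
stacks = 4
per_stack_gbs = 1024 / 8 * 500e6 * 2 / 1e9   # 128 GB/s per 1024-bit DDR stack
total_gbs = stacks * per_stack_gbs            # 512 GB/s aggregate
```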
 
Doesn't automatically mean that it is the HBM at 500MHz that is holding it back. Something else is causing the issues.

It is the low clock speed @1080p holding it back; this is why you get a performance increase when you overclock it.

The problem is nothing to do with the GPU core, as it would be there @2160p as well. The problem is that the core is getting bottlenecked by the HBM @1080p lol.

DM did not see that one coming lol.
 

A 6-lane motorway is a 6-lane motorway, but you can only use one of them in a car. :)
 

And people have overclocked the RAM and seen a minimal increase in performance compared to overclocking the GPU. It is not the HBM causing the problem at lower resolutions.

And you have just reiterated what has been said before: that overclocking increases performance. That is a given. It doesn't mean that the HBM is holding back the GPU at 500MHz.
 

If all the available bandwidth could be used @1080p, then overclocking the memory would give zero performance increase, and possibly even a negative result due to error correction. The fact that overclocking does work means that the HBM is holding the performance back.
 
Bandwidth is bandwidth; it's either there or not, surely?

You can only use all the bandwidth if the core is generating enough data to fill it. You can do that @2160p, as the frames are much bigger; @1080p the frames contain less data, so they can't use all the bandwidth and instead rely on a higher clock speed to get the job done and keep the framerate up.
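
The "frames have less data" point can be made concrete by comparing raw frame-buffer sizes. This is only illustrative; real GPU memory traffic is dominated by textures, depth buffers, and overdraw rather than the final frame:

```python
# Raw frame-buffer size at 32 bits per pixel (4 bytes), illustrative only.
def frame_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    return width * height * bytes_per_pixel

fhd = frame_bytes(1920, 1080)    # ~8.3 MB per frame at 1080p
uhd = frame_bytes(3840, 2160)    # ~33.2 MB per frame at 2160p
ratio = uhd / fhd                # 4.0: each 2160p frame moves 4x the pixel data
```

At a given framerate, 2160p pushes four times the pixel data of 1080p, which is why a bandwidth-heavy design shows its advantage at higher resolutions.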
 

And I can just as easily say that the RAM on the Titan X is holding it back until it is overclocked.

But more than likely Fiji is draw-call starved and can't make use of all of its cores at lower resolutions.

The test that can reveal this will be available on the 20th, when the 'Ashes of the Singularity' benchmark drops for the public with DirectX 12 support.
 

The problem has nothing to do with the cores but is entirely about the memory, because HBM sucks... yet a 290X with GDDR5 is more competitive with a GTX 980 at high resolution than at 1080p, and that card ALSO scales better at higher resolutions.

Here's a hint: you can load up shaders much more efficiently the more work you ask them to do. At lower resolutions the cores are utilised less efficiently, and this effect can be seen on many AMD cards, which have often provided more raw grunt (shader power) and less front-end power. Nvidia has often had more ROPs, or higher-clocked ROPs, which helps significantly when you aren't shader-limited, but they have also lost ground at higher resolutions. Architectures get balanced, and the balance can be struck wherever you like. AMD made a card with ridiculous bandwidth and loaded it up with shaders and fewer ROPs; it's a card aimed at higher resolutions.

The 290X was also designed with that in mind, and the 7970 vs GTX 680 had the same story: AMD aimed the card at higher resolutions with more bandwidth, and it got stronger relative to the GTX 680 the higher you went in resolution.
 

Funny you mention the Titan X; have you noticed how a Titan X or 980 Ti @1080p will totally thrash a Fury X when both cards are at stock?

What is the difference? One uses slow-clocked HBM and the other fast-clocked GDDR5.
 