
What is the point in VRAM amount?

Yeah, I can't really be bothered arguing with people; it's basic stuff, and I've no idea if they are trolling or something, but just dismissing them with a "go Google it" seems a bit unfair too, so :S
 
Yeah, I can't really be bothered arguing with people; it's basic stuff...

A large portion of the population isn't aware of the basic stuff. In this case you are right, because we have a former AMD user (obvious from his username referencing AMD's famous Athlon product) who is desperately trying to pour out disgraceful things.
Probably the idea behind the thread itself is why the Radeon VII has 16 GB while its closest counterpart from NVIDIA only has 11 GB, or why the RX 580 comes in 8 GB versions while the GTX 1060 can be limited to a frustrating 3 GB.

And to address one "famous" quote of the same member AthlonXP:

Yes VRAM is absolutely pointless now in Windows 10!

To all the people who claimed 3 GB and 6 GB of dedicated GPU memory will not be enough for 2019 games: it's pointless, because when you have massive shared GPU memory you will not run out of memory with either 3 GB or 6 GB of dedicated GPU memory.

So, if you don't run out of VRAM, why is the GTX 1060 3GB so weak compared to RX 590?






The problem with saying nothing is that people then come away thinking they're right because no one disagreed or corrected them, so saying nothing makes things worse. I try to treat the forum as somewhere I come to learn, so when I get something wrong, please tell me.

Correction and discussion are the building blocks of a true democracy. Ruling parties and opposition, even just for the sake of it...
 
The 3 GB 1060 had slower RAM as well, AFAIK. But yeah, it wouldn't make THAT much difference due to that alone.

I don't know where the Fury sits on there, but it's still a pretty quick card when VRAM isn't an issue (and only 4 GB is an issue now). We'll probably see the 6 GB 2060 start to struggle by the end of the year too.
 
Yeah, I can't really be bothered arguing with people; it's basic stuff, and I've no idea if they are trolling or something, but just dismissing them with a "go Google it" seems a bit unfair too, so :S

Too many people lately where, even after pointing them to actual, proven information that shows what they're saying is incorrect, you'll still see them reposting the exact same flawed information to support their point a few weeks later.
 
@AthlonXP1800 If this were the case, we wouldn't have GPUs pushing 8 GB or 16 GB of VRAM!

Every gamer might as well just install 32 GB of system RAM and buy a 2 GB VRAM GPU, LOL.

You're being very silly here! It's a fact that if you spill into shared RAM you hurt performance, and it's also a fact that shared RAM isn't just system RAM either; it can also be the page file on the HDD or SSD.

Again, if what you're saying were true, AMD wouldn't have bothered adding the High Bandwidth Cache Controller to Vega GPUs, since you could just use system RAM anyway, LOL! Again, you're wrong!

HBCC lets the user dig into that system RAM to boost performance and gain more effective VRAM. Performance in some titles improved with HBCC; COD: Black Ops 4 was one!
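To illustrate the point about spilling from VRAM into system RAM and then into the page file, here is a rough sketch with ballpark 2019-era peak-bandwidth figures. All numbers are illustrative assumptions, not measurements:

```python
# Ballpark peak-bandwidth figures for a 2019-era gaming PC (illustrative
# assumptions, not measurements), showing why spilling out of VRAM hurts:
# each tier below is reached through a much narrower pipe than the one above.
tiers = [
    ("GDDR5 VRAM (RX 580 class)",          256.0),   # GB/s, on-card
    ("PCIe 3.0 x16 link to system RAM",     15.75),  # GB/s, per direction
    ("Dual-channel DDR4-2666 system RAM",   42.7),   # GB/s, but behind PCIe
    ("NVMe SSD page file",                   3.0),   # GB/s, best case
    ("SATA HDD page file",                   0.15),  # GB/s, sequential
]

for name, gbps in tiers:
    # Time to stream a 1 GB texture set through each tier, in milliseconds.
    ms = 1.0 / gbps * 1000
    print(f"{name:36s} {gbps:7.2f} GB/s  ~{ms:8.1f} ms per GB")
```

Note that a discrete GPU only reaches system RAM through the PCIe link, so the effective rate is the smaller of the two middle figures; the page-file tiers are slower still by an order of magnitude or more.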
 
The 3 GB 1060 had slower RAM as well, AFAIK. But yeah, it wouldn't make THAT much difference due to that alone.

I don't know where the Fury sits on there, but it's still a pretty quick card when VRAM isn't an issue (and only 4 GB is an issue now). We'll probably see the 6 GB 2060 start to struggle by the end of the year too.

Well, on 14 August 2017 the R9 Fury X 4 GB was sitting within a couple of percentage points of the GTX 1070: https://www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_64/31.html
By 15 November 2018 the R9 Fury X 4 GB had fallen around 10% behind the GTX 1070: https://www.techpowerup.com/reviews/Sapphire/Radeon_RX_590_Nitro_Plus/32.html
 
Well, on 14 August 2017 the R9 Fury X 4 GB was sitting within a couple of percentage points of the GTX 1070: https://www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_64/31.html
By 15 November 2018 the R9 Fury X 4 GB had fallen around 10% behind the GTX 1070: https://www.techpowerup.com/reviews/Sapphire/Radeon_RX_590_Nitro_Plus/32.html

One of the reasons there is the change-up of the games benchmarked: games like Doom 2016, especially if Vulkan is in the mix, tend to pull results in AMD's favour.
 
VRAM isn't everything, not having enough is.

Also, I think some games will claim far more VRAM than they actually need. E.g. monitoring software will report 8 GB used, but the game is only actively working on 4 GB, so a 6 GB card will not suffer any performance issues.
 
Yeah, you would not want to be using system memory as shared GPU memory; it's far too slow in comparison to VRAM.
Anyone remember what happened with the 970, and how badly performance tanked when it accessed the 500 MB of slow VRAM?

If people think system RAM does the same job as VRAM without a performance hit, they are seriously mistaken.
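The 970 case can be made concrete with a back-of-the-envelope model. The ~196 GB/s and ~28 GB/s segment figures below are the commonly cited ones for the card's fast and slow memory partitions, used here purely as assumptions:

```python
# Rough model of the GTX 970's segmented memory (commonly cited figures:
# ~196 GB/s for the 3.5 GB fast segment, ~28 GB/s for the 0.5 GB slow one;
# illustrative assumptions, not measurements).
fast_gb, fast_bw = 3.5, 196.0   # GB, GB/s
slow_gb, slow_bw = 0.5, 28.0

def effective_bandwidth(total_gb):
    """Effective GB/s when streaming `total_gb`, filling the fast segment first."""
    in_fast = min(total_gb, fast_gb)
    in_slow = max(total_gb - fast_gb, 0.0)
    seconds = in_fast / fast_bw + in_slow / slow_bw
    return total_gb / seconds

# Touching just the last 0.5 GB nearly halves the effective bandwidth.
print(f"3.5 GB working set: {effective_bandwidth(3.5):6.1f} GB/s")  # 196.0
print(f"4.0 GB working set: {effective_bandwidth(4.0):6.1f} GB/s")  # 112.0
```

The point of the sketch: the slow segment is so much slower that it dominates total transfer time even though it is only an eighth of the capacity, which matches how sharply the card's performance fell off past 3.5 GB.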
 
Too many people lately where, even after pointing them to actual, proven information that shows what they're saying is incorrect, you'll still see them reposting the exact same flawed information to support their point a few weeks later.

Unfortunately this happens a lot as well, People often ignore posts that are correcting them.
 
@AthlonXP1800 If this were the case, we wouldn't have GPUs pushing 8 GB or 16 GB of VRAM!

Every gamer might as well just install 32 GB of system RAM and buy a 2 GB VRAM GPU, LOL.

You're being very silly here! It's a fact that if you spill into shared RAM you hurt performance, and it's also a fact that shared RAM isn't just system RAM either; it can also be the page file on the HDD or SSD.

Again, if what you're saying were true, AMD wouldn't have bothered adding the High Bandwidth Cache Controller to Vega GPUs, since you could just use system RAM anyway, LOL! Again, you're wrong!

HBCC lets the user dig into that system RAM to boost performance and gain more effective VRAM. Performance in some titles improved with HBCC; COD: Black Ops 4 was one!

You're being a very silly boy here! You think it's a fact that shared RAM hurts performance? No. Modern CPUs, GPUs, APUs and Windows 10 now use heterogeneous and parallel computing and can share RAM more efficiently. Just look at the PS4, PS4 Pro, Xbox One, Xbox One X, Switch and the Ryzen 5 2400G with Vega 11 APU, etc., where the CPU and GPU share RAM without hurting performance. Older architectures without heterogeneous and parallel computing support had very poor performance with shared RAM, because the CPU and GPU did not know how to share RAM and fought each other the whole time, suffering a massive performance hit.

When you go from Low graphics settings to Ultra settings you will see a performance hit in games; the same goes for textures as you move from 2K and 4K up to 8K and 16K.
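For perspective on those texture resolutions, here is some order-of-magnitude arithmetic for a single uncompressed RGBA8 texture. Real games use block compression and mip chains, so actual footprints differ; these are illustrative figures only:

```python
# Uncompressed RGBA8 size of a single square texture at each resolution
# mentioned above ("2K" through "16K"). Real games use block compression
# (e.g. BCn formats), so this is just order-of-magnitude arithmetic.
for side in (2048, 4096, 8192, 16384):
    mib = side * side * 4 / 2**20    # 4 bytes per RGBA8 texel
    print(f"{side:>5} x {side:<5} -> {mib:6.0f} MiB")
```

A single uncompressed 16K texture works out to a full gibibyte, which is why pushing texture resolution eats VRAM so much faster than most other settings.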

Raja Koduri once demonstrated HBCC using shared system RAM and claimed a 50% performance boost in one game, but in the real world Vega owners never saw any huge performance boosts in games, never mind 1 or 2 fps, and it never gained extra VRAM, although it did reserve from 11 GB up to 24 GB of shared system RAM. Many Vega owners reported that games with HBCC enabled suffered stuttering/memory leaks, so they had to turn HBCC off; many owners think HBCC was useless, and I agree with their view. HBCC was just a marketing gimmick; I can't believe you fell for it. HBCC does not do much better than plain shared system RAM in real-world gaming performance.

I googled HBCC in Wolfenstein 2 at 4K with 16K textures but can't find any performance numbers.

Anyone who has a PC with 32 GB and a Turing GPU can use the Wolfenstein 2 config settings to enable 16K textures, from my post in the Wolfenstein 2 Vulkan benchmarks thread.

https://forums.overclockers.co.uk/posts/31313724
 
You're being a very silly boy here! You think it's a fact that shared RAM hurts performance? No. Modern CPUs, GPUs, APUs and Windows 10 now use heterogeneous and parallel computing and can share RAM more efficiently. Just look at the PS4, PS4 Pro, Xbox One, Xbox One X, Switch and the Ryzen 5 2400G with Vega 11 APU, etc., where the CPU and GPU share RAM without hurting performance. Older architectures without heterogeneous and parallel computing support had very poor performance with shared RAM, because the CPU and GPU did not know how to share RAM and fought each other the whole time, suffering a massive performance hit.


I think you need to look up the difference between shared and unified memory and then get back to the thread...
 
Anyone remember what happened with the 970, and how badly performance tanked when it accessed the 500 MB of slow VRAM?

Yes, I had a GTX 970 back in 2014 with a 3770K and 16 GB. In games I saw VRAM go up to 4 GB and it did not tank performance, but I saw RAM usage go up and down, so I am really not sure if it actually used shared RAM or not. I never experienced performance loss with 16 GB, but I read many posts from other GTX 970 owners who had performance issues with 8 GB of RAM.

If people think system RAM does the same job as VRAM without a performance hit, they are seriously mistaken.

The Ryzen 5 2400G with Vega 11 APU has no VRAM but can use up to 8 GB of shared system RAM, and it is as fast as an RX 550 with 4 GB of GDDR5 or a GT 1030 with 2 GB of GDDR5.

It will be very interesting to see how the Navi 12 and Navi 16 APUs perform.
 
You're being a very silly boy here! You think it's a fact that shared RAM hurts performance? No. Modern CPUs, GPUs, APUs and Windows 10 now use heterogeneous and parallel computing and can share RAM more efficiently. Just look at the PS4, PS4 Pro, Xbox One, Xbox One X, Switch and the Ryzen 5 2400G with Vega 11 APU, etc., where the CPU and GPU share RAM without hurting performance. Older architectures without heterogeneous and parallel computing support had very poor performance with shared RAM, because the CPU and GPU did not know how to share RAM and fought each other the whole time, suffering a massive performance hit.

When you go from Low graphics settings to Ultra settings you will see a performance hit in games; the same goes for textures as you move from 2K and 4K up to 8K and 16K.

Raja Koduri once demonstrated HBCC using shared system RAM and claimed a 50% performance boost in one game, but in the real world Vega owners never saw any huge performance boosts in games, never mind 1 or 2 fps, and it never gained extra VRAM, although it did reserve from 11 GB up to 24 GB of shared system RAM. Many Vega owners reported that games with HBCC enabled suffered stuttering/memory leaks, so they had to turn HBCC off; many owners think HBCC was useless, and I agree with their view. HBCC was just a marketing gimmick; I can't believe you fell for it. HBCC does not do much better than plain shared system RAM in real-world gaming performance.

I googled HBCC in Wolfenstein 2 at 4K with 16K textures but can't find any performance numbers.

Anyone who has a PC with 32 GB and a Turing GPU can use the Wolfenstein 2 config settings to enable 16K textures, from my post in the Wolfenstein 2 Vulkan benchmarks thread.

https://forums.overclockers.co.uk/posts/31313724

You are crazy.
The PlayStation 4 uses GDDR5 for its main system memory; it doesn't have DDR4, mind you.
Also, HBCC is something that was never mentioned by anyone except you. Do you know the difference between HBM2 at 1024 GB/s and HBCC?
 
The GPUs in the consoles are designed around a unified memory pool, which has anywhere from ~68 GB/s of bandwidth (Xbox One) to 326 GB/s (One X), all of which is faster than an i9-9900K on dual-channel DDR4. That doesn't mean they wouldn't benefit from more bandwidth, but it does mean they are nothing like as hampered as a GPU expecting 500 GB/s of bandwidth and having to make do with system RAM.

Using system RAM as a reserve pool for the GPU can most definitely help in gaming, but it is most certainly not a straight-up extension of the VRAM on a discrete GPU, and it will never be used as such until there is bandwidth parity between the memory pools. So, roll on GDDR5-backed CPUs, then. I have no idea where you got this information from, AthlonXP1800, but you are way, way off course with your expectations there.
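Those bandwidth figures can be sanity-checked from the published bus widths and transfer rates, taken here as assumptions for illustration:

```python
# Theoretical peak bandwidth = bus width (bits) / 8 * transfer rate (GT/s).
# Bus widths and rates below are published specs, used as assumptions.
def peak_gbps(bus_bits, gtps):
    return bus_bits / 8 * gtps

print(f"Xbox One   (256-bit DDR3-2133):  {peak_gbps(256, 2.133):6.1f} GB/s")
print(f"Xbox One X (384-bit GDDR5 6.8):  {peak_gbps(384, 6.8):6.1f} GB/s")
print(f"i9-9900K   (128-bit DDR4-2666):  {peak_gbps(128, 2.666):6.1f} GB/s")
```

The formula reproduces the ~68 GB/s and 326 GB/s console figures quoted above, and shows a dual-channel DDR4-2666 desktop sitting at roughly 43 GB/s, well below even the base Xbox One's unified pool.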
 
The GPUs in the consoles are designed around a unified memory pool, which has anywhere from ~68 GB/s of bandwidth (Xbox One) to 326 GB/s (One X), all of which is faster than an i9-9900K on dual-channel DDR4. That doesn't mean they wouldn't benefit from more bandwidth, but it does mean they are nothing like as hampered as a GPU expecting 500 GB/s of bandwidth and having to make do with system RAM.

The consoles use a smaller fast cache for the GPU as well. It still isn't ideal, really, but games are built around that architecture, so it is less of a problem than running out of dedicated VRAM on a PC. One of the problems HBCC tried to address is that on a PC there is no way for the system to know which data to prioritise for VRAM versus which to page out. The problem isn't just memory bandwidth, though: on consoles the unified memory has low-latency connectivity to the GPU, while on a PC, system memory sits at relatively high latency from the GPU.

As an aside, my 4820K has 59.7 GB/s of memory bandwidth at stock and, IIRC, almost 68 GB/s with the setup I'm using.
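That 59.7 GB/s stock figure checks out from the published specs, assuming quad-channel DDR3-1866 (a 256-bit combined interface):

```python
# Quad-channel DDR3-1866: four 64-bit channels = 256-bit combined interface
# at 1.866 GT/s (published platform specs, used here as assumptions).
quad_ddr3_1866 = 256 / 8 * 1.866   # GB/s theoretical peak
print(f"{quad_ddr3_1866:.1f} GB/s")  # 59.7 GB/s
```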
 
You're being a very silly boy here! You think it's a fact that shared RAM hurts performance? No. Modern CPUs, GPUs, APUs and Windows 10 now use heterogeneous and parallel computing and can share RAM more efficiently. Just look at the PS4, PS4 Pro, Xbox One, Xbox One X, Switch and the Ryzen 5 2400G with Vega 11 APU, etc., where the CPU and GPU share RAM without hurting performance. Older architectures without heterogeneous and parallel computing support had very poor performance with shared RAM, because the CPU and GPU did not know how to share RAM and fought each other the whole time, suffering a massive performance hit.

When you go from Low graphics settings to Ultra settings you will see a performance hit in games; the same goes for textures as you move from 2K and 4K up to 8K and 16K.

Raja Koduri once demonstrated HBCC using shared system RAM and claimed a 50% performance boost in one game, but in the real world Vega owners never saw any huge performance boosts in games, never mind 1 or 2 fps, and it never gained extra VRAM, although it did reserve from 11 GB up to 24 GB of shared system RAM. Many Vega owners reported that games with HBCC enabled suffered stuttering/memory leaks, so they had to turn HBCC off; many owners think HBCC was useless, and I agree with their view. HBCC was just a marketing gimmick; I can't believe you fell for it. HBCC does not do much better than plain shared system RAM in real-world gaming performance.

I googled HBCC in Wolfenstein 2 at 4K with 16K textures but can't find any performance numbers.

Anyone who has a PC with 32 GB and a Turing GPU can use the Wolfenstein 2 config settings to enable 16K textures, from my post in the Wolfenstein 2 Vulkan benchmarks thread.

https://forums.overclockers.co.uk/posts/31313724

Again, if what you're saying were true (which it isn't), AMD would not have needed to create a "controller" when they developed the High Bandwidth Cache Controller (HBCC) for compute and gaming workloads. While I do agree most games saw nothing from this, there were some cases that saw a big performance boost! I have proof that Call of Duty: Black Ops 4 with HBCC enabled saw a big boost in frame rate and smoothness.

Now let's look at what happens when I enable HBCC; look at the dedicated VRAM now!! Again, AMD wouldn't need to do this if what you're saying were true (which it isn't)! Everyone in this thread isn't wrong!

[Screenshot: dedicated VRAM reported after enabling HBCC]

Don't get me started on the console rubbish you spoke about!

Edit

This comparison even shows HBCC loading faster! I never thought about that. I know HBCC helps speed up timelines under compute workloads, and it would seem games also load faster.


Edit 2
HBCC shows fewer frame drops here in RE7 when the guy is turning into new parts of the house; the texture streaming loves the HBCC. At this performance level it wouldn't be noticeable, but it's still a gain in performance either way.
 