
Does AMD need more VRAM to do the same thing?

  • Thread starter: TNA
Watched it, and I think it's pretty clear that the drivers play a large part in how the API talks to the GPU and how much VRAM the GPU then utilises. I also recall driver updates in the past fixing various issues with VRAM use.
 
Allocation doesn't = usage.

Yeah, we'd need to know a lot more about what the game is doing with it. That's why you can get situations where going just 1MB over your VRAM can bring everything to a halt, and others where you might be hundreds of MB, or even GBs, over before you see any negative effect.
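The allocation-vs-usage gap is easy to illustrate with a toy pool allocator. This is just a sketch under an assumed 256MB reservation granularity (the class, names and block size are all hypothetical, not how any real driver works): the figure a monitoring tool reports is the reserved total, not what the workload actually touches.

```python
# Toy VRAM pool: the "driver" reserves (allocates) whole blocks up front,
# but the game may only ever touch (use) part of them.
BLOCK = 256  # MB per reservation block (hypothetical granularity)

class ToyVramPool:
    def __init__(self):
        self.allocated_mb = 0   # what monitoring tools report
        self.used_mb = 0        # what the workload actually touches

    def request(self, mb):
        # round the request up to whole blocks, as a real allocator might
        blocks = -(-mb // BLOCK)            # ceiling division
        self.allocated_mb += blocks * BLOCK
        self.used_mb += mb

pool = ToyVramPool()
pool.request(300)   # texture set
pool.request(70)    # render targets
print(pool.allocated_mb, pool.used_mb)  # 768 vs 370
```

So a tool showing 768MB "in use" here would overstate actual usage by more than 2x.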
 
Ampere uses tensor cores to compress and decompress VRAM data by up to 40%, so 8GB of VRAM can store up to 11.2GB of data. I saw YouTube videos years ago comparing Ampere with Turing, and the Ampere cards used less VRAM at the same resolution and settings. So yes, AMD does need more VRAM to do the same thing. Nvidia likely improved the technology further with Ada, but I haven't read anything on the matter.
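Taking the claimed up-to-40% figure at face value, the 11.2GB number reads the claim as "40% more data per byte of VRAM". Note that reading it instead as "a 40% reduction in footprint" gives a different answer:

```python
vram_gb = 8.0
rate = 0.40  # the claimed best-case figure

# reading 1: compression packs 40% more data into the same space
more_data = vram_gb * (1 + rate)          # 11.2 GB, the figure quoted above

# reading 2: compression shrinks the footprint of the data by 40%
shrunk_footprint = vram_gb / (1 - rate)   # ~13.3 GB of original data fits

print(round(more_data, 1), round(shrunk_footprint, 1))
```

Either way the arithmetic only holds at the best case; real savings depend entirely on how compressible the data is.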

Nvidia is also working on a lossless image compression technology. Here's the link

Random-Access Neural Compression of Material Textures
 
That's interesting. Never knew about that.
 

Turns out tensor cores are used for even more than we originally thought then! :p :D

That's new to me too, though (the tensor core and VRAM thing). Is there a link for that? It would be a good read/watch.
 

Can you link to a whitepaper regarding that? All I saw was a rumour before Ampere was actually released, but nothing after that:

Traversal coprocessor: We have had more leaks on NVIDIA's next-gen GeForce RTX 3000 series than any family of graphics cards before it, with an interesting "traversal coprocessor" on the new GeForce RTX 3080 and GeForce RTX 3090 graphics cards. You can read more on that here.

NVCache: Ampere is meant to have something called NVCache, which would be NVIDIA's own form of AMD's HBCC (High Bandwidth Cache Controller, more on that here). NVCache would use your system RAM and SSD to super-speed game load times, as well as optimizing VRAM usage. You can read more on NVCache here.

Tensor Memory Compression: NVCache is interesting, but Tensor Memory Compression will be on Ampere, and will reportedly use Tensor Cores to both compress and decompress items that are stored in VRAM. This could see a 20-40% reduction in VRAM usage, or more VRAM usage with higher textures in next-gen games and Tensor Memory Compression decreasing that VRAM footprint by 20-40%.

How fast is the GeForce RTX 3090? Freaking fast according to rumors, with 60-90% more performance than the current Turing-based flagship GeForce RTX 2080 Ti. We could see this huge performance leap in ray tracing titles, but we'll have to wait a little while longer to see how much graphical power NVIDIA crams into these new cards. You can read more on those rumors here.
Power hungry: As for power consumption, GA102 reportedly uses 230W -- while 24GB of GDDR6X (which we should see on the new Ampere-based TITAN RTX) consumes 60W of power. You can read more on that here.
Production begins soon: NVIDIA is reportedly in the DVT (or Design Validation Test) range of its new GeForce RTX 3000 series graphics cards. Mass production reportedly kicks off in August 2020, with a media event, benchmarks, and more in September 2020 as I predicted many months ago. More on that here.
 
Read more: https://www.tweaktown.com/news/7439...0-series-card-could-have-24gb-vram/index.html
 

That was a rumour before Ampere was launched. It stated Ampere had a traversal coprocessor (which it didn't), NVCache (which AFAIK it does not have, because HBCC needs HBM memory), etc. This is a detailed overview of how raytracing is done in Ampere/Turing/RDNA2/RDNA3:

I tried searching for Tensor Memory Compression on the Nvidia site and nothing comes up.
 

I probably read the same rumour you did, because it was about five years ago. Here's the Turing whitepaper. Page 21 discusses Turing memory compression. However, it's just bandwidth compression, not VRAM compression. There's no mention of VRAM compression, so maybe it's a red herring and the rumours confused the two. I guess the YouTube comparison video I saw, with Turing using less VRAM than Pascal, could have been showing VRAM allocation rather than usage.
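The bandwidth-vs-footprint distinction is worth illustrating. Here's a toy sketch of the idea behind bandwidth compression: delta-encoding a framebuffer tile shrinks the bytes moved over the bus, but the tile still occupies its full footprint in VRAM. (Purely illustrative; real delta colour compression hardware works very differently.)

```python
# Toy "bandwidth compression": delta-encode a framebuffer tile before
# sending it over the bus. The tile still occupies its full 64-entry
# footprint in VRAM; only the data transferred shrinks.

def delta_encode(pixels):
    # store the first value, then differences; runs of identical pixels
    # become runs of zero deltas, which we count instead of storing
    out = [pixels[0]]
    run = 0
    for prev, cur in zip(pixels, pixels[1:]):
        d = cur - prev
        if d == 0:
            run += 1
        else:
            if run:
                out.append(("zeros", run))
                run = 0
            out.append(d)
    if run:
        out.append(("zeros", run))
    return out

tile = [50] * 60 + [51, 52] + [52] * 2  # mostly flat shading, 64 pixels
encoded = delta_encode(tile)
print(len(tile), len(encoded))  # 64 5: footprint unchanged, transfer shrunk
```

That's why a card can save bandwidth on compressible render targets without its reported VRAM usage dropping at all.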

 


Well, we knew Nvidia put a lot of effort into bandwidth compression when Maxwell came out, and AMD had to go OTT with memory bandwidth to sort of compete. Although AMD seemed to steal a march on Nvidia last generation with RDNA2, Nvidia came back this generation and gets away with less memory bandwidth.

WRT the recent neural texture compression research, I would imagine that since it uses the Tensor cores (which are also heavily used for DLSS, etc.), it might need more throughput in that area (probably why an RTX 4090 was used). It might be interesting to see what the RTX 5000/RTX 6000 series does in that regard.
 
IDK, but NV has had years of skimping on VRAM, so they must have learned how to pack suitcases better.

TBF, AMD really should work on it too. Needing bigger memory controllers and more VRAM costs them more money. Plus it would definitely help their APUs and consoles!

But TBH, if you look at how much VRAM increases have stagnated since 2016 from both companies (compared to 2008~2016), there is definitely optimisation work being done.

We should be on high-end cards with over 100GB of VRAM by now if the 2009-to-2016 rate of increase had kept pace (1.28GB to 12GB). The RTX 4090 is 3x to 4x faster than a Titan Xp but only doubles the VRAM capacity.
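The "over 100GB" extrapolation checks out, assuming the quoted 1.28GB (2009) and 12GB (2016) endpoints and repeating the same growth factor over the following seven years:

```python
gb_2009, gb_2016 = 1.28, 12.0
growth = gb_2016 / gb_2009            # ~9.4x over seven years
projected_2023 = gb_2016 * growth     # same pace for another seven years
print(round(growth, 1), round(projected_2023, 1))  # 9.4 112.5
```

So keeping pace would put a 2023 flagship at roughly 112GB, versus the RTX 4090's actual 24GB.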

The issue is that we are still stuck with 8GB cards selling for up to £400. Even consoles can allocate more VRAM now.
 
True, but NV are running faster VRAM, and faster it needs to be when it sometimes starts a long conversation with its friend, system RAM.
 

For some reason AMD pared down the Infinity Cache size on Navi 32 and Navi 31. Plus Navi 33 still has the tiny Infinity Cache amount Navi 23 had. No doubt Nvidia has hogged most of the GDDR6X supply, but I do wonder whether having to use more MCD chiplets actually makes sense over a larger die size.

I know they want to cut down on die sizes, but sometimes I really think they are going too far with this. Having to use more MCD chiplets, more memory chips, a bigger PCB, etc. all adds more complexity, power and cost.

@KompuKare sometimes talks about this too.
 
As someone said in the comments section, it's a shame Intel wasn't included. Having just picked up a 3060 Ti, I'm sad that some people were calling 8GB cards worthless junk for months and it came to nothing really, as I was waiting for the market for 8GB cards to crash and burn. :D
 
Some will think I am joking, but I don't joke when it comes to ice cream and tea :eek:

I'm fresh out of Fab :(

Honestly, Rowntree's don't hold a candle to Calippos

EDIT: should probably say something about GPUs to remain on topic. GRRR GPU manufacturers, over priced... mine is better than yours, i like this even if you don't that make me right. Mine has better drivers than yours, yes it does, no it doesn't yes it does. This guy on some random site who plays this game i've never heard of crashed... haahah your drivers suck so lame. My features are better than your features, yours suck because it can't do this feature that's really good even though that feature doesn't feature in the games you like. Look at my graph it's bigger.

I think I covered all recent news.

I'm going to bed it's been a rough day...
 