
NVIDIA’s Neural Texture Compression - 90% Less VRAM Usage

So 7zip archives are degrading the files I back up every day? :D



Close enough is often good enough - especially for the purpose of this topic: textures.

Does it actually matter if, when the texture is "recreated" using AI, a couple of pixels have RGB values that are off by single digits?



I could almost agree with you that differences in Audio files are more easily noticed, but then you rolled out the Audiophile bingo card with things like "fuller" and "distinguished" :D
Close enough is an admission that there is degradation.

I'm not talking about whether people can notice the difference.

It either degrades or it doesn't, so what argument are you trying to make?

Moving images are where compression hits hardest and the most noticeable degradation occurs, i.e. games and movies/shows. That's exactly what compression tech does and will do.

Games don't use key frames, but movies and shows do when being broadcast, so compression makes it even worse there.

As for how I describe audio, that's my own personal take; it's less tinny if you prefer that expression, or less muffled if you prefer.

Nothing wrong with using compression, it just absolutely needs to be said alongside it that you should expect degradation.
 
Not always; lossy vs lossless compression is a thing.
The trade-off is usually the need for processing power to decompress on the fly.
There's no true expression of lossless being unaffected; it's always described as close enough for the average person.

I mean, there's a reason they keep making new methods and codecs to get better than where they were; if things were truly 'lossless' then why keep creating new compression tech?
 
There's no true expression of lossless being unaffected; it's always described as close enough for the average person.

I mean, there's a reason they keep making new methods and codecs to get better than where they were; if things were truly 'lossless' then why keep creating new compression tech?
To make them more efficient.
And yes there are many examples of truly lossless compression. It's easy to test as well.
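A minimal sketch of that test, using Python's built-in zlib as a stand-in for a lossless archiver like 7-Zip: compress, decompress, and check the bytes come back identical.

```python
import zlib

# Any bytes will do; repeated content just makes the compression ratio look nicer.
original = b"The same block of bytes, over and over. " * 1000

compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

print(f"original:   {len(original)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"bit-identical after round trip: {restored == original}")  # True, every time
```

Swap in any lossless format you like (7z, zip, FLAC) and the same check passes; that is the whole point of the 'lossless' label.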
 
There's no true expression of lossless being unaffected; it's always described as close enough for the average person.

I mean, there's a reason they keep making new methods and codecs to get better than where they were; if things were truly 'lossless' then why keep creating new compression tech?
The clue is in the name lossless :confused:

Answer the question, when I use 7zip to compress files, are they degraded when later extracted?
 
For audio, jumping from that crappy MP3 to 24 bit audio files is huge.
Incorrect.

Here is a test I just conjured up: Sample A and Sample B.

The source file was a 24-bit FLAC. For one sample I trimmed the file and saved it as FLAC exactly as is. The other sample was converted to MP3 using the Audacity Extreme VBR preset, which is 210-270 kbps and 16-bit, and I then transcoded that MP3 back to a 24-bit FLAC.

If the difference is as big as you say, then you should be able to tell which sample was the original FLAC file and demonstrate how you came to that decision, rather than just guessing. Note that the file sizes are irrelevant, as I have deliberately tweaked the file size of each, so no funny ideas.

Sample A:

Sample B:


To my ears and on my equipment, Spotify easily sounds as good as or better than Tidal lossless, but the big caveat is that the source masterings dictate that. Spotify seems to have the bigger share of better-quality masters, so even though it's compressed, that makes no difference to how the music sounds for those tracks against a lossless platform that doesn't have the same master.
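Going back to the two samples: if anyone wants to check clips like these with numbers rather than ears, here's a rough null-test sketch, assuming the two samples have been exported as time-aligned WAV files (the filenames are just placeholders).

```python
# Rough null test: subtract one clip from the other and measure what's left.
# Needs the third-party numpy and soundfile packages; filenames are placeholders.
import numpy as np
import soundfile as sf

a, rate_a = sf.read("sample_a.wav")
b, rate_b = sf.read("sample_b.wav")
assert rate_a == rate_b, "sample rates must match"

n = min(len(a), len(b))      # the clips must be time-aligned for this to mean anything
diff = a[:n] - b[:n]

rms_diff = np.sqrt(np.mean(diff ** 2))
rms_sig = np.sqrt(np.mean(a[:n] ** 2))
print(f"residual sits {20 * np.log10(rms_diff / rms_sig):.1f} dB below the signal")
```

A lossless copy of the same trim nulls to silence against the source; a lossy transcode leaves a residual, and the interesting part is how far down that residual actually sits.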
 
There's no true expression of lossless being unaffected; it's always described as close enough for the average person.

I mean, there's a reason they keep making new methods and codecs to get better than where they were; if things were truly 'lossless' then why keep creating new compression tech?
lossless = save as much space as you can without losing data
lossy = I'm willing to lose a bit of data in exchange for even more space savings
there are new algorithms over time because previous attempts weren't perfect, and processors improve, and requirements change.
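A toy illustration of that trade-off (not modelled on any real codec): compress some 16-bit samples as-is, then quantise them to 8 bits first and compress again. The second payload is smaller, but the round trip no longer returns the original.

```python
import zlib
import numpy as np

# A 1-second 440 Hz "signal" as 16-bit samples, standing in for any real data.
t = np.arange(44_100)
signal = (12_000 * np.sin(2 * np.pi * 440 * t / 44_100)).astype(np.int16)

# Lossless: compress the raw samples; decompression gives back the exact same data.
packed = zlib.compress(signal.tobytes(), level=9)
assert np.array_equal(np.frombuffer(zlib.decompress(packed), dtype=np.int16), signal)

# Lossy: quantise to 8 bits first (throw the low byte away), then compress the rest.
quantised = (signal // 256).astype(np.int8)
packed_lossy = zlib.compress(quantised.tobytes(), level=9)
restored = np.frombuffer(zlib.decompress(packed_lossy), dtype=np.int8).astype(np.int16) * 256

print(len(packed), len(packed_lossy))    # the lossy payload is smaller...
print(np.array_equal(restored, signal))  # ...but False: the discarded bits never come back
```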
 
Compression means loss of image quality
Doesn't matter if it's imperceptible.

Look at how modern video compression works and you'll be amazed just how much data can be encoded (and/or discarded) with virtually no perceptible difference.

I'm not sure there is much data we work with these days that is stored in its absolute raw form.
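On the video point: one concrete example of 'discard what you won't notice' is chroma subsampling, which consumer video formats apply before any actual encoding even starts. A rough sketch of the data saving, using a made-up frame and a simplified RGB-to-luma/chroma split rather than a proper codec-grade conversion:

```python
import numpy as np

# Stand-in 1080p frame; real footage would come from a decoder, not random noise.
h, w = 1080, 1920
rgb = np.random.randint(0, 256, size=(h, w, 3)).astype(float)

# Very rough RGB -> luma/chroma split (BT.601-style weights, no gamma handling).
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness, kept at full resolution
cb, cr = b - y, r - y                   # colour-difference planes

# 4:2:0: average each 2x2 block of chroma, so colour is stored at quarter resolution.
cb_sub = cb.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
cr_sub = cr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

full = y.size * 3                           # 4:4:4 sample count
kept = y.size + cb_sub.size + cr_sub.size   # 4:2:0 sample count
print(f"{kept / full:.0%} of the samples, before the encoder has even started")  # 50%
```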
 
A cousin of mine is an audiophile. I swapped out one of the albums he was listening to regularly that was "super mega high bit rate mega super" for one that was 320 kbps, and he didn't notice until I said something a year later :cry:

The amount of BS the audiophile community comes out with is quite entertaining :D

MP3 works by removing audio frequencies that the human ear can't hear; it is technically a reduced-quality file, just not in a way that anyone would notice, including your audiophile friends - they are not dogs. :D

Image compression works by removing duplicate and near-duplicate blocks of bits and restoring them on decompression.
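That's roughly the idea behind the simplest lossless schemes. A toy run-length encoder (not what any real image format uses verbatim) shows the 'collapse the duplicates, restore them on decompression' part:

```python
# Run-length encoding in miniature: collapse runs of identical values,
# then restore them exactly on decompression.
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    runs: list[tuple[int, int]] = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1] = (value, runs[-1][1] + 1)
        else:
            runs.append((value, 1))
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([value]) * count for value, count in runs)

row = b"\x00" * 500 + b"\xff" * 500          # one row of pixels: half black, half white
runs = rle_encode(row)
print(len(row), "bytes ->", len(runs), "runs")
assert rle_decode(runs) == row               # bit-for-bit identical after decompression
```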

AMD are developing a similar technology to Nvidia's; they reduced a 34.8 GB tree scene to 51 KB, which is no doubt the most extreme example they could come up with, but still, I think you will agree, is quite mind-blowing - basically a 99.9999% size reduction.

https://www.tomshardware.com/pc-com...cpu-work-to-the-gpu-yields-tremendous-results

Also...

Trees aren't the only objects that can be rendered with this paradigm. We can expect other objects, and possibly even textures, to be rendered this way in the future. Nvidia is already working on neural texture compression to reduce texture demands on video memory, but work graphs and mesh nodes provide another method of achieving the same goal (and will not be limited to Nvidia GPUs).
 
MP3 works by removing audio frequencies that the human ear can't hear; it is technically a reduced-quality file, just not in a way that anyone would notice, including your audiophile friends - they are not dogs. :D

Image compression works by removing duplicate and near-duplicate blocks of bits and restoring them on decompression.

AMD are developing a similar technology to Nvidia's; they reduced a 34.8 GB tree scene to 51 KB, which is no doubt the most extreme example they could come up with, but still, I think you will agree, quite mind-blowing - basically a 99.9999% size reduction.

https://www.tomshardware.com/pc-com...cpu-work-to-the-gpu-yields-tremendous-results

Also...
That AMD demo is not compression at all. It's procedurally generating the scene. Very different.
 
That AMD demo is not compression at all. It's procedurally generating the scene. Very different.

Yes, it's a different technology, but it achieves a similar result. "Procedural tree generation", BTW, is how tree scenes are made in game development; trees are not placed manually, that would take too long, they are generated by the engine based on predefined rule sets. It's not that AMD are "procedurally generating trees to replace real ones" or anything like that; it's that they are applying the technology to what is commonly used in game development.

It's different to "Neural Texture Compression" in that it's compressing the entire scene, everything, not just the textures. I think Tom's Hardware do a poor job of explaining that, and probably misunderstand it.
 
Yes, it's a different technology, but it achieves a similar result. "Procedural tree generation", BTW, is how tree scenes are made in game development; trees are not placed manually, that would take too long, they are generated by the engine based on predefined rule sets. It's not that AMD are "procedurally generating trees to replace real ones" or anything like that; it's that they are applying the technology to what is commonly used in game development.

It's different to "Neural Texture Compression" in that it's compressing the entire scene, everything, not just the textures. I think Tom's Hardware do a poor job of explaining that, and probably misunderstand it.
Procedural generation is not compression.

Neural Texture Compression, and I’m going purely off the name here, is using neural networks to find new ways to compress textures beyond traditional algorithmic approaches. We’ll see more and more improvements coming like this rather than banging heads against transistor counts and brute forcing rendering.
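Purely as a conceptual sketch of what 'a neural network as a texture decoder' could look like - this is not NVIDIA's actual NTC pipeline, just the general shape of the idea - you store a small set of learned weights instead of texels and evaluate a tiny network per sample:

```python
import numpy as np

# Placeholder "learned" weights: in a real scheme these would be trained offline
# to reproduce one specific texture. Random values here, purely for illustration.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 3)), np.zeros(3)

def decode_texel(u: float, v: float) -> np.ndarray:
    """Evaluate the tiny network at one texture coordinate and return an RGB value."""
    hidden = np.maximum(0.0, np.array([u, v]) @ W1 + b1)   # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))       # sigmoid -> RGB in [0, 1]

# The "compressed texture" is just the weights, a fixed few KB regardless of resolution;
# the GPU pays for it in maths per sample instead of memory.
weights_bytes = sum(a.nbytes for a in (W1, b1, W2, b2))
print(weights_bytes, "bytes of weights ->", decode_texel(0.25, 0.75))
```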
 
Procedural generation is not compression.

Neural Texture Compression, and I’m going purely off the name here, is using neural networks to find new ways to compress textures beyond traditional algorithmic approaches. We’ll see more and more improvements coming like this rather than banging heads against transistor counts and brute forcing rendering.

Yeah, I find all this mind-blowing. It's a lol answer to the lack-of-VRAM complaint, what a way to rebut that.... :D

There are 2 million trees in this scene; I did not place them manually... ;) I'm also using Nanite, running on an RX 7800 XT, and it runs badly. I pushed it as far as I could out of curiosity, and without Nanite it would run at about 1 frame per 10 minutes. To those who think I do: I don't hate Nanite, I just hate seeing improper use of it. Nanite has its own performance cost; you would only use it if that cost is less than the cost of not using it. I feel like some game developers use it just because it's something that exists.

 
I've long thought (yet have zero proof) that nVidia uses some form of image/texture compression in their general pipeline, and that it was an area that gained them a performance advantage over AMD. My opinion was originally based on side-by-side comparison examples on review sites, in games where there were leaves and similar objects. The textures on nVidia, I felt, just had a bit of softness to them which AMD's didn't, with my gut feeling being that it looked like how image compression would affect clarity.

I even feel it might be the case in the 2D desktop environment. Years ago, I commented in a post here about how I felt that the picture quality from my 2400G APU had more clarity than the discrete nVidia 1080 Ti fitted in the machine. This was using the same HDMI cable and 4K TV display and switching between each source on the computer. I know that shouldn't be the case given a digital signal path down HDMI... so any difference would have to be at the original source.

I still have the same 2400G and 4K TV (different motherboard though), and just last week I installed a 3060 Ti into the machine... and you know what, again, it feels like the nVidia card's image quality just lacks a little something that the 2400G had in terms of clarity on the desktop.

So it really doesn't surprise me that nVidia would now be putting out some form of texture compression as a feature, because I think they've been doing it for years!
 
I've long thought (yet have zero proof) that nVidia uses some form of image/texture compression in their general pipeline, and that it was an area that gained them a performance advantage over AMD. My opinion was originally based on side-by-side comparison examples on review sites, in games where there were leaves and similar objects. The textures on nVidia, I felt, just had a bit of softness to them which AMD's didn't, with my gut feeling being that it looked like how image compression would affect clarity.

I even feel it might be the case in the 2D desktop environment. Years ago, I commented in a post here about how I felt that the picture quality from my 2400G APU had more clarity than the discrete nVidia 1080 Ti fitted in the machine. This was using the same HDMI cable and 4K TV display and switching between each source on the computer. I know that shouldn't be the case given the digital signal path down HDMI, but I really felt there was a difference.

I still have the same 2400G and 4K TV (different motherboard though), and just last week I installed a 3060 Ti into the machine... and you know what, again, it feels like the nVidia card's image quality just lacks a little something that the 2400G had in terms of clarity on the desktop.

So it really doesn't surprise me that nVidia would now be putting out some form of texture compression as a feature, because I think they've been doing it for years!

I read, absolutely ages ago, around the RTX 2000 series launch, about Nvidia using some type of texture compression that was made possible with GDDR6 on the 2000 series.

Can't find anything on Google but I do remember reading about it.
 
Found the old picture I took when I posted about it. Camera on a tripod, equal manual exposure settings: take one pic, unplug the cable and plug it into the other source, take another pic.

To me, the two look like they have the same general exposure in terms of white background brightness... but there is a difference in the sub-pixel aliasing/values at the edges of the text... so to me that must come from the source.

[Image: GFrIJY2.png]


As I understand it, ClearType in Windows calculates the font rendering and sub-pixel values on the CPU and then passes that to the GPU to render. If both GPUs are being instructed to render the same R/G/B values for sub-pixels, and the display route after the GPU (cable and TV) remains the same, then if there is a visual difference (as the image would appear to show), that difference must come from the GPU render pipeline. Hence my thought that the general nVidia pipeline has long used compression somewhere to boost performance.

Either way... it's a tiny pixel-peeping difference. I don't lose sleep over it.
 
This will take up computational resources on your GPU to perform any on-the-fly compression and decompression, but the same hardware will be required for all the other stuff too - the lower-end hardware which would benefit the most from this is already quite cut down. Plus there will probably be additional CPU overhead via your drivers. Maybe there will be some dedicated hardware in future cards to offload this - consoles already do this for I/O.
 
This will take up computational resources on your GPU to perform any on-the-fly compression and decompression, but the same hardware will be required for all the other stuff too - the lower-end hardware which would benefit the most from this is already quite cut down. Plus there will probably be additional CPU overhead via your drivers. Maybe there will be some dedicated hardware in future cards to offload this - consoles already do this for I/O.
Feels like every gen they come out with a new 'uses less VRAM'.

NV are doing everything they possibly can bar adding physical VRAM to their GPUs.
 
Feels like every gen they come out with a new 'uses less VRAM'.

NV are doing everything they possibly can bar adding physical VRAM to their GPUs.

I was making more of a general comment - I think AMD and Intel might also be looking at similar things. If you look at the demo in the article, it's on a fixed, relatively low-detail object with only 15 textures, and it was run on an RTX 5080. I wonder how this will work in practice in real time at 60 FPS, on a lower-end card, while it also needs to do upscaling, RT, etc.? I suspect this will run better on higher-end cards (higher FP8/INT8 throughput), which ironically will need it less because they have enough VRAM. I just wonder if we will get some dedicated hardware block to do this instead of general-purpose hardware.
 
I remember when DX11 was still being "cooked up" that it was going to be a game changer for multi-GPU VRAM usage - Xfire or SLI, it didn't matter, apparently. Didn't happen, did it?

Around the same time, SLI was "in the works" for Batman: Arkham Knight. Also didn't happen, for different reasons of course - a shoddy console port.

Take everything with a pinch of salt.
 
If it gives an 8 GB card the ability to create textures at 95% of normal quality where gameplay is good, rather than the frames dropping through the floor, I'm in. However, the downside, I'm guessing, is that this is another cog in the process, so much like multi-frame generation, is it going to add latency to the game? Or will it not be a factor?
It'd be a bit like driving my brother-in-law's Toyota pickup... turn the wheel and a second later I know we're turning, when the horizon tilts and I feel the body lean. Perfectly fine for the pickup, and it adds to the fun, especially off-road, but not really welcome in a more sporting environment.
 