
10GB VRAM enough for the 3080? Discuss..

Status
Not open for further replies.
I am not judging anyone, full stop.

I am quite old and started gaming seriously in the early MS-DOS PC days, when graphics were dreadful. For me that was not important, as I was far more interested in the actual gameplay.

Interestingly, the quality of games in those days was far superior to the rubbish available now, which looks very pretty and not much else.

I have been thinking in the last year about not posting on this forum (or any other) anymore as I agree with you that I don't contribute anything of any real interest. I was a gamer for decades before I belonged to any forum and I think that may be the way forward again.
Just ignore him mate. He does not speak for everyone here. You certainly contribute a hell of a lot more to this place than he does. All he seems to do is go around on his high horse talking down to people he does not agree with.

He is the guy who went around calling anyone who purchased a 3090 a mug, then eventually went on to buy one himself. Lol.
 
[image: cath-kidston-mugs-1.jpg]


^ I will be the Cheshire cat one.
 
How do you optimise for more VRAM? AMD will definitely need to optimise for slower VRAM and a smaller memory interface.

And you're correct, it's not rocket science, but it is computer science, which I suspect you know very little about based on your comment :D

They already have it; it's called Infinity Cache. https://youtu.be/JdbqAoyk0Mg
There are suggestions that Intel is aiming for a similar feature on their gaming GPUs. In this particular aspect of GPU design, it appears that Nvidia is behind AMD.

The cache appears not to store game texture assets but working data, such as the structures used for ray tracing.
 
They already have it; it's called Infinity Cache. https://youtu.be/JdbqAoyk0Mg
There are suggestions that Intel is aiming for a similar feature on their gaming GPUs. In this particular aspect of GPU design, it appears that Nvidia is behind AMD.


Infinity Cache = a way to cut costs via a narrower memory bus and cheaper/slower VRAM. That is the reality of why it was done. If you believe it was done for any other reason, you are being brainwashed by marketing.

It's good tech for keeping costs down, but nothing more than that.
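For rough context on the bus-width trade-off: peak memory bandwidth is just bus width times per-pin data rate. A quick sketch using the commonly quoted figures for a 256-bit/16 Gbps GDDR6 setup versus a 320-bit/19 Gbps GDDR6X one (treat the numbers as illustrative):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Narrower, cheaper GDDR6 bus (6800 XT-style): 256-bit at 16 Gbps
narrow = peak_bandwidth_gbs(256, 16.0)   # 512.0 GB/s
# Wider GDDR6X bus (3080-style): 320-bit at 19 Gbps
wide = peak_bandwidth_gbs(320, 19.0)     # 760.0 GB/s
print(narrow, wide)
```

A big on-die cache is what lets the narrower bus compete: every cache hit is traffic that never touches that bus in the first place.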
 
I asked you what area you were interested in. You didn't answer, so I picked the simplest one, which has been discussed many times. Bottom line: RDNA2 is a budget console chip, where the PC variants don't even have the optimisations that both Sony and Microsoft have added. Ampere, on the other hand, is a fully fledged system incorporating Tensor cores, ray-tracing cores and CUDA cores, all working in parallel.

If you know how the tech works, why then describe it as marketing?

The AMD 6800 and 6900 series of cards do appear to have an advantage in chip design over at least the PS5, as they have Infinity Cache, which the PS5 doesn't; it is likely the same for the Xbox Series X. If RDNA2 is a budget console chip, then what does that make the Nvidia 3070 and below, given that the AMD 6800 series outperforms them in most games? https://youtu.be/-Y26liH-poM

The 6800 series often beats the 2080 Ti. I hope those 2080 Ti buyers knew they were buying a budget console GPU when they splashed out a grand plus.
 
Infinity Cache = a way to cut costs via a narrower memory bus and cheaper/slower VRAM. That is the reality of why it was done. If you believe it was done for any other reason, you are being brainwashed by marketing.

It's good tech for keeping costs down, but nothing more than that.

Even if that is all it does, is that a bad thing for gamers on a budget?
 
Before Turing, it was normal for gamers to get more performance for the same money from one generation to the next.

This (Infinity Cache) is the type of innovation that makes that happen.

It's not like manufacturers are just going to make less and less margin until they make none. It's on the manufacturers not only to improve performance, but to do so at reduced cost.

That's what progress is supposed to look like.
 
I asked you what area you were interested in. You didn't answer, so I picked the simplest one, which has been discussed many times. Bottom line: RDNA2 is a budget console chip, where the PC variants don't even have the optimisations that both Sony and Microsoft have added. Ampere, on the other hand, is a fully fledged system incorporating Tensor cores, ray-tracing cores and CUDA cores, all working in parallel.
If you know how the tech works, why then describe it as marketing?
You made a sweeping statement that it's more advanced in every way when I asked you for specifics. You failed to mention any of the architectural advantages at all, which is what I was really looking for. I thought you might have some deeper insight beyond what you've been told by the PR department.
Anyway, I'm bored of going in circles. Enjoy spouting your narrow field of view, but please don't pretend to be upset when someone pulls you up on it.
 
They already have it; it's called Infinity Cache. https://youtu.be/JdbqAoyk0Mg
There are suggestions that Intel is aiming for a similar feature on their gaming GPUs. In this particular aspect of GPU design, it appears that Nvidia is behind AMD.

The cache appears not to store game texture assets but working data, such as the structures used for ray tracing.

From what I read, it's used to store assets that it might need in subsequent frames (grass, rocks, other small textures), which removes the need to fetch them from VRAM again. It could probably also store RT working data.

I'm all for companies cutting costs through innovation instead of gimping products.
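That frame-to-frame reuse idea can be sketched as a toy LRU cache sitting in front of "VRAM" — purely illustrative, every name here is made up:

```python
from collections import OrderedDict

class TinyAssetCache:
    """Toy LRU cache standing in for an on-die cache in front of VRAM."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def fetch(self, asset: str) -> None:
        if asset in self.store:
            self.hits += 1
            self.store.move_to_end(asset)       # mark as most recently used
        else:
            self.misses += 1                    # this is where VRAM would be hit
            self.store[asset] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict least recently used

cache = TinyAssetCache(capacity=3)
frame = ["grass", "rock", "tree"]
for _ in range(2):           # two consecutive frames touching the same assets
    for asset in frame:
        cache.fetch(asset)
print(cache.hits, cache.misses)   # second frame is served entirely from cache
```

As long as the working set fits (the assets reused between frames), the second frame never goes out to memory at all — which is the whole pitch.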
 
"Ray tracing basically works by having dedicated hardware perform calculations of how the light rays behave, using a technique known as bounding volume hierarchy (BVH) traversal. Performing that task is very memory-intensive, which is why VRAM demands leap up when you enable ray tracing in a game. AMD says it’s able to keep “a very high percentage of the BVH working set” directly inside the Infinity Cache, reducing latency and improving overall performance."

P.S. Infinity Cache also reduces power consumption compared to a wider memory bus, which is good for keeping the PC cooler and quieter, as well as keeping electricity usage and bills down.
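For intuition on why the BVH working set matters: every ray walks a tree of bounding volumes, and each node visited is another memory fetch, so keeping those nodes in a fast on-die cache saves a lot of VRAM round trips. A bare-bones 1D sketch (not real GPU code, just counting node fetches):

```python
class BVHNode:
    """A node bounding an interval [lo, hi]; leaves hold a primitive id."""
    def __init__(self, lo, hi, left=None, right=None, prim=None):
        self.lo, self.hi = lo, hi
        self.left, self.right, self.prim = left, right, prim

def traverse(node, point, visits):
    """Collect primitives whose bounds contain `point`, counting node reads."""
    visits[0] += 1                       # every visit = one node fetched from memory
    if not (node.lo <= point <= node.hi):
        return []                        # prune: skip this whole subtree
    if node.prim is not None:
        return [node.prim]
    return traverse(node.left, point, visits) + traverse(node.right, point, visits)

# Tiny two-level hierarchy over four primitives
leaves = [BVHNode(i, i + 1, prim=i) for i in range(4)]
inner = [BVHNode(0, 2, leaves[0], leaves[1]), BVHNode(2, 4, leaves[2], leaves[3])]
root = BVHNode(0, 4, inner[0], inner[1])

visits = [0]
hit = traverse(root, 2.5, visits)
print(hit, visits[0])   # one primitive found, but several nodes were fetched
```

One query touched five nodes to find one primitive; multiply that by millions of rays per frame and it is obvious why that working set living in cache rather than VRAM matters.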
 
"Ray tracing basically works by having dedicated hardware perform calculations of how the light rays behave, using a technique known as bounding volume hierarchy (BVH) traversal. Performing that task is very memory-intensive, which is why VRAM demands leap up when you enable ray tracing in a game. AMD says it’s able to keep “a very high percentage of the BVH working set” directly inside the Infinity Cache, reducing latency and improving overall performance."

P.S. Infinity Cache also reduces power consumption compared to a wider memory bus, which is good for keeping the PC cooler and quieter, as well as keeping electricity usage and bills down.
It's designed to increase effective throughput as its primary function, and that's for all assets, not just ray tracing.
 
You made a sweeping statement that it's more advanced in every way when I asked you for specifics. You failed to mention any of the architectural advantages at all, which is what I was really looking for. I thought you might have some deeper insight beyond what you've been told by the PR department.

The PR department doesn't draw up the specs :rolleyes:

Anyway, I'm bored of going in circles. Enjoy spouting your narrow field of view, but please don't pretend to be upset when someone pulls you up on it.

I'm sure someone will hand you back your rattle :D
 
So can the RX 6000 series skyrocket in ray-tracing performance if developers optimise around Infinity Cache, which is not present in Nvidia cards?

Wut, picking a GPU is so hard atm!!

I really do wonder about the longevity of the Ampere architecture in general. Will an RTX 3070 manage to match the Series X for the entirety of the Series X's lifetime? I wonder.
 
So can the RX 6000 series skyrocket in ray-tracing performance if developers optimise around Infinity Cache, which is not present in Nvidia cards?

Wut, picking a GPU is so hard atm!!

I really do wonder about the longevity of the Ampere architecture in general. Will an RTX 3070 manage to match the Series X for the entirety of the Series X's lifetime? I wonder.
Architecture-specific optimisations will help increase performance.
How much of an increase is the unknown factor.
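As a down-to-earth example of what "architecture-specific optimisation" means in practice: the same work, ordered to suit the cache, can miss far less often. A toy direct-mapped cache model comparing row-major versus column-major traversal of a matrix (all parameters invented for illustration):

```python
def cache_misses(addresses, num_lines=64, line_bytes=64):
    """Count misses in a toy direct-mapped cache over a byte-address stream."""
    tags = [None] * num_lines
    misses = 0
    for addr in addresses:
        line = addr // line_bytes        # which cache line this byte lives in
        slot = line % num_lines          # direct-mapped: one slot per line
        if tags[slot] != line:
            tags[slot] = line            # miss: fetch the line, evict old one
            misses += 1
    return misses

N, ELEM = 256, 4   # 256x256 matrix of 4-byte elements
row_major = [(r * N + c) * ELEM for r in range(N) for c in range(N)]
col_major = [(r * N + c) * ELEM for c in range(N) for r in range(N)]

# Same data, same number of accesses — only the order differs
print(cache_misses(row_major), cache_misses(col_major))
```

The row-major walk misses once per cache line; the column-major walk misses on essentially every access. Same data, same total work — the access pattern alone changes the miss count by an order of magnitude, which is the kind of win architecture-specific tuning chases.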
 
Architecture-specific optimisations will help increase performance.
How much of an increase is the unknown factor.
My only concern is actually about Kepler... it was released just before the PS4/Xbox One, and it aged horribly, even over a short span of 2-3 years.

Maxwell and Pascal were released after those consoles, and they probably included something similar to the console architecture that made overall optimisation and porting easier? Maxwell and Pascal aged very well (in my perspective ofc).

I hope Ampere won't be like Kepler... that would be horrible lmao :D

But no one can guarantee anything so...
 
So can RX 6000 series skyrocket in ray tracing performance if developers optimize around infinity cache, which is not present in nvidia cards?

Wut, picking a GPU is so hard atm!!

I really do wonder the longeivity of ampere arcihtecture in general, will a rtx 3070 manage to match series x for the entirety of series x's lifetime? i wonder


I would not expect much more of an increase in 6000 series ray tracing than we have now because of Infinity Cache. What will make an impact is upscaling, which is yet to be used on the 6000 series; then we will get a good idea of how the 3000 series and 6000 series compare.

It has been said that AMD has provided their current upscaling code to some game developers. https://youtu.be/LZC3wxwMY_U?t=679

I expect the Radeon 7000 series to be much better at ray tracing than the 6000 series, with better hardware support in the GPU.
 