The thread which sometimes talks about RDNA2

I think 16GB was a great move. After playing a bunch of hours of Dawn, memory usage has skyrocketed as I've gone across the map.
Yes, some of this will be cache, but even so, 4K @ low is using 10GB of GPU memory and 15GB of system memory while doing a few other things. Cannot wait for my 3090...
 
Diminishing returns, isn't it? We need hundreds of times the rendering performance to get anywhere near true photorealism with ray tracing, but it will start looking pretty close soon enough. If it weren't for denoising, I believe 1 ray per sample would actually look terrible. It certainly does in 3ds Max when trying to render scenes without tonnes of samples and camera AA.
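As a quick illustration of why 1 ray per sample needs a denoiser: Monte Carlo noise only falls off as 1/√samples, so quadrupling the ray count merely halves the noise. A toy sketch, with made-up numbers that stand in for no particular renderer:

```python
import random
import statistics

def estimate_pixel(spp, true_value=0.5):
    """Monte Carlo estimate of one pixel: the average of `spp` noisy samples."""
    samples = [true_value + random.uniform(-0.5, 0.5) for _ in range(spp)]
    return sum(samples) / spp

# Error falls as 1/sqrt(spp): quadrupling the ray count only halves the
# noise, which is why 1 sample/pixel needs a denoiser to look acceptable.
for spp in (1, 4, 16, 64, 256):
    errs = [abs(estimate_pixel(spp) - 0.5) for _ in range(2000)]
    print(f"{spp:4d} spp -> mean abs error {statistics.mean(errs):.4f}")
```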

What will happen soon is that at 5nm we'll get enough raster to drive 4K/120fps easily, with 2-3 times more ray tracing perf, and a gen after that we'll get maybe 10 times more ray tracing, which ought to be enough for everybody :)
 
I think 16GB was a great move. After playing a bunch of hours of Dawn, memory usage has skyrocketed as I've gone across the map.
Yes, some of this will be cache, but even so, 4K @ low is using 10GB of GPU memory and 15GB of system memory while doing a few other things. Cannot wait for my 3090...
Shadow of Mordor with its texture pack used to drink video memory, now I recall. I remember playing that on a Vega 64 and encountering stuttering due to video memory saturation.

Will keep an eye out for this game, smogs.
 
Shadow of Mordor with its texture pack used to drink video memory, now I recall. I remember playing that on a Vega 64 and encountering stuttering due to video memory saturation.

Will keep an eye out for this game, smogs.
It's a little broken at the moment (graphics-settings-wise), but I've had no issues apart from that.
After restarting, the game is only using 6GB of VRAM, but as you explore more areas or fast travel, usage goes up by quite a bit. One hour in I'm using 8GB, so three hours in you will be using 10GB+, and that's on low.

The poor 1080 Ti is struggling though: at 4K low it only barely hits 50fps, mostly 39-45fps.
 
What will happen soon is that at 5nm we'll get enough raster to drive 4K/120fps easily, with 2-3 times more ray tracing perf, and a gen after that we'll get maybe 10 times more ray tracing, which ought to be enough for everybody :)
I think you're forgetting that games keep getting more demanding, so it's not that simple.
 
Nope, though it is still supported on workstation GPUs for some professional apps.

Won't it even support DX12 explicit multi-GPU, deployed in games like Shadow of the Tomb Raider or Deus Ex: Mankind Divided?

I think you're forgetting that games keep getting more demanding, so it's not that simple.

Sometimes you also have to wonder whether developers have a good enough understanding of DirectX 12 to be using it as their primary API.
 
It's a solid 20% faster than the 3070, with 16GB vs 8GB, for $80 more. The price is high, but it's much better value than the RTX 3070, which also doesn't have enough VRAM.

I'm thinking the 6700 XT is going to be equal to the RTX 3070, with 12GB of VRAM, for $50 less at $449.

I said the 6800 non-XT was expensive, and it is; it's too rich for my blood. Nvidia's cards are more expensive, and it's not just that they have less VRAM: they don't have enough.

I think the price I got my 1080 Ti for back in the day has spoiled me rotten. I paid £630 for an Nvidia flagship, and now the non-flagship 6800 is going to cost me almost as much, because let's face it, tax and other extras are going to be added to the price tag. I've had a system for some time that has been pretty spot on: if an item is released at 100 USD MSRP, you can usually multiply it by 8.6 and get the approximate price in DKK, which is why I'm not impressed with the 6800. It's been spot on so far for the RTX 2000 release, monitors, the RTX 3000 series, and the GTX 1000 series. Perhaps I should just wait a month more and get the 6800 XT if the budget allows. I mean, a 12% higher price but 15-20% more performance seems like a no-brainer at this point.
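A quick sketch of those two rules of thumb, assuming the US launch MSRPs of $579 (6800) and $649 (6800 XT); the 8.6 multiplier is the poster's own heuristic:

```python
# Sanity-checking the two rules of thumb above. The 8.6 multiplier is the
# poster's heuristic; the MSRPs are the assumed US launch prices.
USD_TO_DKK_RETAIL = 8.6

msrp_6800, msrp_6800xt = 579, 649
print(f"6800    ~ {msrp_6800 * USD_TO_DKK_RETAIL:,.0f} DKK")
print(f"6800 XT ~ {msrp_6800xt * USD_TO_DKK_RETAIL:,.0f} DKK")

# ~12% higher price for ~15-20% more performance = better perf per krone.
price_ratio = msrp_6800xt / msrp_6800   # ~1.12
for perf_gain in (1.15, 1.20):
    print(f"+{perf_gain - 1:.0%} perf -> "
          f"{perf_gain / price_ratio:.2f}x perf/price vs the 6800")
```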

I reckon just by the time I'm dead we'll have 120fps photo-realistic VR graphics. Just when I stop caring about porn...great.
Or your last few minutes on earth can be heavenly bliss :P
 
I think the price I got my 1080 Ti for back in the day has spoiled me rotten. I paid £630 for an Nvidia flagship, and now the non-flagship 6800 is going to cost me almost as much, because let's face it, tax and other extras are going to be added to the price tag.

The 1080 Ti seems to have been a boon of a card in hindsight. Timing those right makes you appreciate it in the rougher times. I got lucky with the 290X all those years back.
 
People here have been talking about Smart Access Memory.
I think it's to do with data compression.
AMD did file a patent application (don't have the link, but it was in one of these threads) concerning a three-tier compression system with two algorithms.
It's about compressing in strong-but-slow and weak-but-fast ways, and re-compressing (and, of course, decompressing), with smart control over which algorithm to use depending on which level of cache you are dealing with (obviously, different levels have different latencies, and the relative latency has a significant impact).
This offers a way to transfer more data through a limited bandwidth.
I wonder if SAM is to do with that?
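A rough sketch of the idea as described above; every name and threshold here is invented for illustration, not taken from the actual patent:

```python
import zlib

# Illustrative only: pick a compressor per memory tier based on how much
# latency that tier tolerates. Tier names and levels are invented, not AMD's.
TIERS = {
    "L1":   0,  # too latency-sensitive: store uncompressed
    "L2":   1,  # weak-but-fast compression
    "VRAM": 9,  # strong-but-slow: bandwidth matters more than latency here
}

def pack(data: bytes, tier: str) -> bytes:
    level = TIERS[tier]
    return data if level == 0 else zlib.compress(data, level)

def unpack(blob: bytes, tier: str) -> bytes:
    return blob if TIERS[tier] == 0 else zlib.decompress(blob)

block = b"texture tile " * 400
for tier in TIERS:
    packed = pack(block, tier)
    assert unpack(packed, tier) == block
    print(f"{tier:4s}: {len(block)} -> {len(packed)} bytes")
```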
 
Better than 500W TBP any day?

----

Anyone figured out the Infinity Cache?
If there's an L3 miss, will the data still be fetched from VRAM at 512GB/s?
Infinity Cache seems to be a way of catching up whilst using the cheaper RAM and keeping the overall cost of the card down.
It may get support from developers, but it may get forgotten like the eSRAM that went into the Xbox One.
I did find it odd that Nvidia went with GDDR6X over a bigger pool of regular GDDR6, but I guess the speed negates the need for a large pool of VRAM.
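On the miss question: presumably yes, a miss is served over the 256-bit GDDR6 bus at its raw 512GB/s, and the advertised win is the blended average. A back-of-envelope, using approximate figures from AMD's launch material:

```python
# Back-of-envelope: effective BW = hit_rate * cache_BW + miss_rate * VRAM_BW.
# Hit rate and cache bandwidth are rough figures from AMD's launch slides;
# only the 512 GB/s GDDR6 number is straightforward spec math.
VRAM_BW_GBPS  = 512.0    # 256-bit GDDR6 @ 16 Gbps
CACHE_BW_GBPS = 1660.0   # approximate on-die Infinity Cache bandwidth
HIT_RATE_4K   = 0.58     # AMD's quoted average hit rate at 4K

effective = HIT_RATE_4K * CACHE_BW_GBPS + (1 - HIT_RATE_4K) * VRAM_BW_GBPS
print(f"effective bandwidth ~ {effective:.0f} GB/s "
      f"({effective / VRAM_BW_GBPS:.2f}x the raw GDDR6 bus)")
```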
 
Infinity Cache seems to be a way of catching up whilst using the cheaper RAM and keeping the overall cost of the card down.
It may get support from developers, but it may get forgotten like the eSRAM that went into the Xbox One.
I did find it odd that Nvidia went with GDDR6X over a bigger pool of regular GDDR6, but I guess the speed negates the need for a large pool of VRAM.

It's typical of Nvidia. Why use something that would fit the needs just fine if you can reinvent the wheel and let the customer pick up the bill :P.
 
Infinity Cache seems to be a way of catching up whilst using the cheaper RAM and keeping the overall cost of the card down.
It may get support from developers, but it may get forgotten like the eSRAM that went into the Xbox One.
I did find it odd that Nvidia went with GDDR6X over a bigger pool of regular GDDR6, but I guess the speed negates the need for a large pool of VRAM.

I was under the impression that none of the games they showed benchmarks for were coded for this Infinity Cache, and the games seem to perform well anyway.
 
I was under the impression that none of the games they showed benchmarks for were coded for this Infinity Cache, and the games seem to perform well anyway.
That's correct, Infinity Cache does not need to be coded for; the GPU driver will allocate the most useful assets to this cache. Think of it the same as an SSHD.

Most of the repeatedly used assets or actions will be in cache. However, I'm sure AMD will produce an API for developers to leverage and hand-pick what goes in the cache.

Right now the GPU driver will do it.
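In other words, something like a transparent LRU: the application just issues reads, and whatever is hot ends up in the fast tier. A toy model, nothing like the real (unpublished) hardware policy:

```python
from collections import OrderedDict

class TransparentCache:
    """Toy LRU model of a hardware/driver-managed cache: callers just
    fetch assets; hot ones get served from the fast tier automatically."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lru = OrderedDict()

    def fetch(self, asset):
        if asset in self.lru:                # hit: served on-die
            self.lru.move_to_end(asset)
            return "cache"
        if len(self.lru) >= self.capacity:   # full: evict least-recently-used
            self.lru.popitem(last=False)
        self.lru[asset] = True               # miss: fill from VRAM
        return "vram"

cache = TransparentCache(capacity=2)
for asset in ["road", "sky", "road", "tree", "road", "sky"]:
    print(f"{asset:4s} -> {cache.fetch(asset)}")
```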
 
Eh, AMD gave HBM a go to much the same criticism.

HBM provides higher bandwidth and lower power consumption, so that last part is its saving grace. If you want to point fingers at AMD, it should be for launching an upper-midrange gaming GPU (Vega, non-Frontier) and sticking HBM on it, causing the price to be higher than desirable. Gotta love that Raja guy.

So HBM is pretty cool; the way they chose to use it was less so. To me, it makes sense to use HBM in enterprise/prosumer cards, not so much in consumer cards where price is very important.
 