NVIDIA ‘Ampere’ 8nm Graphics Cards

No one said the 3080 can't give good FPS at 1440p, but its architecture is scaled more for performance at higher resolutions. It's still obviously going to be great at 1440p in graphically demanding games.
It was that Dave guy who keeps saying the 3080 is wasted at resolutions below 4K when it obviously isn't, but then again I believe he also said 10GB wasn't enough for 4K and then turned around and bought one himself.
 
It was that Dave guy who keeps saying the 3080 is wasted at resolutions below 4K when it obviously isn't, but then again I believe he also said 10GB wasn't enough for 4K and then turned around and bought one himself.
Half of the posts on this forum are guff. If you know something doesn't make logical sense then just let it go in one eye and out of the other. :)

I still have concerns about 10GB being enough to last the generation, but now that AMD have failed so hard to produce a card at the right price point in even remotely purchasable quantities, the 3080 10GB looks more reasonable despite the valid VRAM concerns. I didn't expect the average 6800 XT to end up costing 20% more than a 3080 10GB, but somehow it did.
 

I've been having a chuckle at the Ampere purchasers, especially 3090 owners, who've spent the last few months full-on anti-nVidia, even to the point of knowingly misrepresenting information to attack them and digging up everything they can against the Ampere launch, all while being full-on pro-AMD and talking up the 6800 as an nVidia killer.
 
I've been having a chuckle at the Ampere purchasers, especially 3090 owners, who've spent the last few months full-on anti-nVidia, even to the point of knowingly misrepresenting information to attack them and digging up everything they can against the Ampere launch, all while being full-on pro-AMD and talking up the 6800 as an nVidia killer.

I really want to see how the AIB partner 6900 XTs go, like the Asus Strix LC; if that turns up soon, things could get interesting.
 
The Asus 6800 XT Strix LC outperformed the reference 6900 XT; that's what makes a 6900 XT version interesting. :D
Yes, because the 6900 XT is only 5-10 percent better than the 6800 XT in most cases, so when the 6800 XT is overclocked it will make up that gap. When the 6900 XT is overclocked it will also be 5-10% higher.

There isn't really much remarkable about it... it's normal.
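A quick back-of-the-envelope on why the gap just re-opens once both cards are overclocked (all numbers here are rough assumptions taken from the post above, not benchmark results):

```python
# Toy arithmetic for the overclocking argument above.
# The stock gap and the overclock gain are illustrative assumptions, not measured data.
stock_6800xt = 100                        # baseline relative performance
stock_6900xt = 107                        # assume ~7% faster at stock (middle of the 5-10% range)
oc_gain = 0.08                            # assume ~8% from a manual overclock on either card

oc_6800xt = stock_6800xt * (1 + oc_gain)  # ~108: roughly catches a stock 6900 XT
oc_6900xt = stock_6900xt * (1 + oc_gain)  # ~115.6: the 5-10% gap simply reappears

print(f"Overclocked 6800 XT: {oc_6800xt:.1f} vs stock 6900 XT: {stock_6900xt}")
print(f"Overclocked 6900 XT: {oc_6900xt:.1f}")
```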
 
Navi 2 is hamstrung by its memory configuration; even with the Infinity Cache, the memory bus width still isn't good enough, as shown by the way it dominates at lower resolutions but falls off a cliff at 4K. If they had given it a 512-bit memory bus, I bet 4K would be a totally different story; my guess is the trade-off was too much extra heat and power. This is the first iteration of Infinity Cache on a GPU though, so I bet next time round they'll address the glaring gap between its 4K performance and its 1080p/1440p performance and balance the card better.
 
I still stick to my belief that, short of some crazy breakthrough, techniques like Infinity Cache are the preserve of console-type applications, where developers have full insight into it and are always working with it in mind. PC development, even for games that come from console to PC, just doesn't work like that.

It is why things like hybrid SSD caching, etc. only work so far and no more.
 
It was that Dave guy who keeps saying the 3080 is wasted at resolutions below 4K when it obviously isn't, but then again I believe he also said 10GB wasn't enough for 4K and then turned around and bought one himself.
Haha. Yeah, that was awesome to see. Not seen him around much these days :D
 
I still stick to my belief that, short of some crazy breakthrough, techniques like Infinity Cache are the preserve of console-type applications, where developers have full insight into it and are always working with it in mind. PC development, even for games that come from console to PC, just doesn't work like that.

It is why things like hybrid SSD caching, etc. only work so far and no more.

Problem is, right now we know exactly zero about how much Infinity Cache really contributes to Navi 2's performance... We are all fairly certain though that the memory bus width is its Achilles' heel. I can only imagine the cost and heat if they had opted for a full 512-bit bus. Unless they can come up with a way to leverage Infinity Cache more fully, the 4K performance is always going to be poor compared to its lower-resolution performance, where the memory bandwidth isn't holding it back.

Supposedly a 55% hit rate on the cache? Imagine if they could fully soak that at 4K; that would definitely give them more performance.
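To put a rough number on that, here's a toy effective-bandwidth model. The 512 GB/s and 1024 GB/s figures are just bus-width arithmetic for 16 Gbps GDDR6; the cache bandwidth and the higher hit rates are assumptions purely for illustration:

```python
# Toy model: how a cache hit rate changes effective memory bandwidth.
# Bus bandwidth = bus width (bits) / 8 * data rate (Gbps per pin).
gddr6_gbps = 16
bw_256bit = 256 / 8 * gddr6_gbps   # 512 GB/s, what Navi 2 actually ships with
bw_512bit = 512 / 8 * gddr6_gbps   # 1024 GB/s, the hypothetical wider bus

cache_bw = 1600                    # assumed Infinity Cache bandwidth, illustrative only

def effective_bw(hit_rate, vram_bw):
    """Weighted mix: hits are served from the on-die cache, misses go out to GDDR6."""
    return hit_rate * cache_bw + (1 - hit_rate) * vram_bw

print(f"512-bit bus, no cache: {bw_512bit:.0f} GB/s")
for hit_rate in (0.55, 0.75):      # 55% is the 4K figure quoted above
    print(f"256-bit bus at {hit_rate:.0%} hit rate: {effective_bw(hit_rate, bw_256bit):.0f} GB/s effective")
```

The absolute numbers are only as good as the assumed cache bandwidth, but the point stands: the effective figure swings a lot with the hit rate, which is why soaking the cache at 4K would matter.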

But developers may never be able to exploit it; as you say, it may work on consoles, but on PCs the performance loss from traversing the buses etc. probably makes it not worthwhile.
 
Navi 2 is hamstrung by its memory configuration; even with the Infinity Cache, the memory bus width still isn't good enough, as shown by the way it dominates at lower resolutions but falls off a cliff at 4K. If they had given it a 512-bit memory bus, I bet 4K would be a totally different story; my guess is the trade-off was too much extra heat and power. This is the first iteration of Infinity Cache on a GPU though, so I bet next time round they'll address the glaring gap between its 4K performance and its 1080p/1440p performance and balance the card better.

It's not RDNA 2 that's hamstrung by memory bandwidth. The 4K scaling is normal. It's just that Ampere's FP32 throughput remains largely underutilised at all resolutions below 4K; it's only at 4K that the architecture is finally able to stretch its legs.
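For a sense of scale on that last point, the raw pixel counts alone show how much more parallel work each frame hands the shader array at 4K (a simple sketch, nothing architecture-specific):

```python
# Pixels per frame at common resolutions, relative to 1440p.
# Shading cost isn't purely proportional to pixel count, but it's a reasonable
# first-order picture of why a very wide FP32 design only fills up at higher resolutions.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}
base = resolutions["1440p"][0] * resolutions["1440p"][1]
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x 1440p)")
```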
 
In terms of getting the most out of these caching systems, the hit rate is the problem: on consoles developers can easily optimise to maximise the hit rate and work within the envelope of the hardware; on PC that is an enormous task with lots of variables.
 
I've been having a chuckle at the Ampere purchasers, especially 3090 owners, who've spent the last few months full-on anti-nVidia, even to the point of knowingly misrepresenting information to attack them and digging up everything they can against the Ampere launch, all while being full-on pro-AMD and talking up the 6800 as an nVidia killer.


That's when the turd hits the fan, when the fanboys pick their jaws up as they see they have again lost the round. I'm glad AMD is putting up some kind of a fight.
 
It's not RDNA 2 that's hamstrung by memory bandwidth. The 4K scaling is normal. It's just that Ampere's FP32 throughput remains largely underutilised at all resolutions below 4K; it's only at 4K that the architecture is finally able to stretch its legs.

I don't think so. Below 4K the performance is great, but it drops off a cliff at 4K; it's not linear at all as far as I can see. I will still sell my 3080 and buy a 6800 XT as I prefer the AMD driver suite, but then I only game at 3440x1440, so either card is going to be great for my use. Having used RTX in Watch Dogs: Legion and CoD, and seeing as it's basically unusable in Cyberpunk 2077 on the current-gen cards, I'm happy to swap for pure raster performance. In fact, from the screenshots I've seen of Cyberpunk 2077 with RT on, it actually looks less atmospheric and worse than with it off.
 
In terms of getting the most out of these caching systems, the hit rate is the problem: on consoles developers can easily optimise to maximise the hit rate and work within the envelope of the hardware; on PC that is an enormous task with lots of variables.

This is my point: it's gambling on a tech that works perfectly fine on a console but will be so hard to make work on PC that devs just won't want to sacrifice the time... It would have been better to leave the cache out and just put a wider memory bus in.
 
NVIDIA official Cyberpunk 2077 RTX 3090 and RTX 3080 performance leaks out
https://videocardz.com/newz/nvidia-...7-rtx-3090-and-rtx-3080-performance-leaks-out

At max 4K settings with RT on, even the RTX 3090 is unable to reach 60 FPS with DLSS, and with DLSS off none of the graphics cards even reach 30 FPS!

I don't see the FPS in this title being an issue as it is a single-player RPG. I really hope this is a showcase of what can be done with RT, and yes, I know it's using hybrid rendering.


I really want to see how the AIB partner 6900 XTs go, like the Asus Strix LC; if that turns up soon, things could get interesting.

If they had launched a year ago then it would have been interesting. Today I have to ask: why buy next gen that doesn't do next gen well? These do overclock well, but overclocking these days has become as exciting as tuning a TV. I do think AMD have done a great job with RDNA2, but they are not a charity. They need to catch up with next-gen features rather than holding the PC back at console levels.
 