
How will Nvidia react to the AMD 6000 series launch?

Exactly this.

It's good to see Nvidia have competition again, but unless AMD can counter DLSS (something "in the works" is not enough), any big title will perform better on the 3000 series, because you can be sure Nvidia will pay the devs to support it. On top of this you have AMD's implementation of RT running largely in the normal shader pipeline. We all know how that performs on a 1080 Ti, and it's why Nvidia added dedicated RT cores to offload the ray tracing calculations on the 2000 and 3000 series.

All this is speculation though, time will tell when we see some real reviews with a wide range of benchmarks.
It's great AMD have bounced back though; competition will drive down prices for all of us going forward, so whatever happens it's a win-win :)

I mean, it's fair. I knew when we saw the Nvidia benchmarks at their "launch" that they were spun for RT and the 2x boasts, but we all knew we'd really be looking at more like +35% on top of last gen, and honestly that was enough for many of us to be excited anyway. So we just have to reserve judgement with AMD and give them the same treatment. Are these benchmarks legit, are they cherry-picked, does Infinity Cache just give them a straight-up win or does it need to be optimized for every game? All perfectly valid and open questions.

I do think AMD does have some RT-specific transistors in their CUs now which speed up the RT calculations roughly 10x, but it really just depends on how much this translates to in terms of raw RT power. Will it be enough to do, say, basic global illumination or ray traced reflections? Who knows. Their performance won't be on par with a GTX 1000 series though, that's for sure, because raw ray tracing essentially in "software", that is to say just running on unoptimized general purpose hardware, would be very bad. Pretty sure they're not doing that.

If that's the best possible spin you can put on it then AMD have had a very good day indeed. I really wouldn't have gone with the "the 3090 isn't really a gaming card" routine though; it's used for gaming by gamers, it's a gaming and prosumer card. That reeked a bit of desperation. Also, throwing shade at AMD's "launch" after NV's absolute cluster***k of a launch, from Jensen's kitchen to the present day where people still haven't got cards they paid for on day 1, isn't a good look either. Obviously you did emphasise RT, as that's clearly the fig leaf du jour.

5/10.

The 3090 not being a gaming card is fairly well accepted among many of us and the pro reviewers, and was long before AMD's lineup appeared; it's not a reflexive reaction to their launch. There's even a thread quoting this from Gamers Nexus https://www.overclockers.co.uk/foru...card-gamers-nexus-destroys-the-3090.18900140/ saying it word for word, and I fully agree with all their criticisms. It's what put me off buying a 3090 even though I have the cash for such a purchase. And honestly AMD's lineup for this spot is dead on: the 6900 XT looks like a fantastic piece of hardware, trading blows with a 3090 for much less money, right in the kind of price point I'm comfortable at. It has that slight premium in speed, but the premium in price won't make your eyes bleed, or at least not mine.

As I said, I'm looking to upgrade my rig for a better gaming experience. I don't do competitive online gaming any more like I used to back in the day; I don't need 400-500fps in CS with 1ms latency, I just want pretty games. The extra raster performance from both camps is nice, but not worth much to me if it doesn't buy me new graphical effects. I want an improvement on last generation, and that is now coming more from ray tracing, so it matters to me. So I want to know, if I'm going to get immersed in the world of Cyberpunk 2077, whether I'll be able to use the RT effects and ideally not be punished down to 1080p from my normally expected 4K. It's an open question whether AMD will provide that; IF they can, I'll cancel my 3080 preorder and almost certainly get a 6900 XT. But just like I waited for user reviews of Nvidia's lineup, I'll be waiting for AMD's.

I know you think it's some kind of mental cope to deal with Nvidia getting crushed or whatever, but it's not. I've been in tech for more than 25 years, I work in tech, I've bought high-end video cards almost every generation since before Nvidia even existed, and I've owned a plethora of cards from both AMD (and ATI) and Nvidia, and 3DFX etc. I like AMD; I've used their cards solo, in CrossFire (a pair of 4870s) and in dual-GPU single card form (a 5970). Stop being so blindingly jaded, it shows through.

hurr durr/10
 

I mean, I can list plenty of places saying the opposite, I just can't be bothered; with the amount of Nvidia fanboys in this thread it's a futile attempt, black is white etc. Nothing personal, Gerard.

I'm reading above about how Infinity Cache is going to be wasted because it needs to be implemented by the game developers... I swear, it scares me, the high post count some people have and the drivel they spout with clearly zero understanding of Infinity Cache. Hell, we still have people claiming 10GB isn't enough VRAM and comparing the memory tech used on the Ampere cards to how AMD are doing it with the 6000 series...
 
Right now their only real gaming feature advantage is DLSS. As for speed in games and drivers, how are you coming to that conclusion considering the cards aren't even out yet?
That's a big advantage; a lot of future games will have it, Nvidia will make sure of that now they have some competition.
 
I swear, it scares me, the high post count some people have and the drivel they spout with clearly zero understanding of Infinity Cache

Oooh what is that I smell... grow up.

PS: Infinity Cache will have been designed against current games and an estimation of future ones, but they currently only have a 50-60% hit rate with current titles, and that will go down with future titles; unlike with console development, games on the PC by and large won't be designed to optimise usage of it.
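To put a rough number on what that 50-60% hit rate might be worth, here's a back-of-the-envelope sketch. The 512 GB/s figure is the commonly quoted 256-bit GDDR6 spec; the assumption that the cache itself never becomes the bottleneck is purely for illustration.

Code:
# Rough, illustrative model: if a fraction `hit_rate` of memory traffic is served
# by the on-die cache, the 256-bit GDDR6 bus only has to carry the misses, so the
# total traffic it can sustain scales by 1 / (1 - hit_rate).
# Assumes the cache itself is never the bottleneck, which is a simplification.

GDDR6_BW_GBPS = 512.0   # 256-bit bus @ 16 Gbps, the commonly quoted figure

def effective_bandwidth(hit_rate: float, vram_bw: float = GDDR6_BW_GBPS) -> float:
    """Total request bandwidth sustainable when only cache misses touch VRAM."""
    return vram_bw / (1.0 - hit_rate)

for hr in (0.50, 0.60):
    print(f"hit rate {hr:.0%}: ~{effective_bandwidth(hr):.0f} GB/s effective")

Which is also why the hit rate dropping in future titles matters: at 40% the same maths only gives roughly 850 GB/s effective.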
 
Benchmarks with and without a Ryzen CPU are needed. Without these we can't be sure how the comparison to Nvidia stands.

I'm reluctant to upgrade my CPU/mobo without a clear justification. If we're talking more than +5% with a full AMD setup, that might be enough to sway me into a full upgrade.
 
PS: Infinity Cache will have been designed against current games and an estimation of future ones, but they currently only have a 50-60% hit rate with current titles, and that will go down with future titles; unlike with console development, games on the PC by and large won't be designed to optimise usage of it.

One of my major questions is on Infinity Cache. Obviously not all data is equal: some is read from the VRAM with regularity, which makes it better to cache on chip if you can, whereas some is read very rarely, which makes it basically pointless to cache. And while Infinity Cache is a LOT of cache versus other caching systems, it's not a lot of data versus the total VRAM size. So what you put in there is going to be extremely important with regards to the improvement you get. I'm extremely interested in how this is decided, as we don't really seem to know right now. As I think you're suggesting, it might be general rules based on the typical usage we see in games today, is that right? Or could it be something like a machine-learned algorithm that decides? Or is it going to be a per-application profile thing that requires constant attention to games? Could some games get huge benefits from this and others not? I think these are all valid questions we won't have answered until we can benchmark the hardware against a suite of different games.
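As a toy illustration of the "not all data is equal" point (entirely synthetic, nothing to do with AMD's actual replacement policy): with a cache that only holds 10% of the data, the hit rate is driven almost entirely by how skewed the access pattern is, i.e. by whether the frequently reused data is what ends up cached.

Code:
# Toy simulation: replay two synthetic access traces through a simple LRU cache.
# Purely illustrative -- nothing here reflects AMD's real hardware or policy.
import random
from collections import OrderedDict

def lru_hit_rate(accesses, cache_size):
    """Replay a block access trace through an LRU cache and return the hit rate."""
    cache, hits = OrderedDict(), 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)        # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict the least recently used block
    return hits / len(accesses)

random.seed(0)
n_blocks, cache_size, n_accesses = 10_000, 1_000, 200_000   # cache holds 10% of blocks
uniform = [random.randrange(n_blocks) for _ in range(n_accesses)]
# 80% of accesses go to a "hot" 10% of blocks, the rest are spread uniformly
skewed = [random.randrange(n_blocks // 10) if random.random() < 0.8
          else random.randrange(n_blocks) for _ in range(n_accesses)]

print(f"uniform access pattern: {lru_hit_rate(uniform, cache_size):.0%} hit rate")
print(f"skewed access pattern:  {lru_hit_rate(skewed, cache_size):.0%} hit rate")

Whether the real thing relies on anything like simple recency, fixed heuristics or per-game profiles is exactly the open question.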
 
One of my major questions is on Infinity Cache. Obviously not all data is equal: some is read from the VRAM with regularity, which makes it better to cache on chip if you can, whereas some is read very rarely, which makes it basically pointless to cache. And while Infinity Cache is a LOT of cache versus other caching systems, it's not a lot of data versus the total VRAM size. So what you put in there is going to be extremely important with regards to the improvement you get. I'm extremely interested in how this is decided, as we don't really seem to know right now. As I think you're suggesting, it might be general rules based on the typical usage we see in games today, is that right? Or could it be something like a machine-learned algorithm that decides? Or is it going to be a per-application profile thing that requires constant attention to games? Could some games get huge benefits from this and others not? I think these are all valid questions we won't have answered until we can benchmark the hardware against a suite of different games.

I can't imagine it won't have some per-application/scenario learning of the most common access patterns - but generally it will be targeted at certain things like lightmaps (although mutability is generally a consideration with these kinds of caches) and other assets that are known by nature to be bandwidth heavy.

Depending on how it is implemented, the interesting thing here is that some of these assets are also highly compressible, i.e. largely greyscale data used for various types of light and height mapping, etc.
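Purely as a toy demonstration of that compressibility point - zlib here is just a stand-in for whatever lossless scheme the hardware might actually use:

Code:
# Toy demo: smooth, largely greyscale data (think height or baked light maps)
# compresses far better than noisy data of the same size.
import random
import zlib

random.seed(0)
n_texels = 256 * 256   # one 256x256 8-bit channel

# A smooth gradient, roughly what a height map channel looks like, versus pure noise.
smooth = bytes((i // 256) % 256 for i in range(n_texels))
noisy = bytes(random.randrange(256) for _ in range(n_texels))

for name, data in (("smooth greyscale", smooth), ("random noise", noisy)):
    ratio = len(data) / len(zlib.compress(data, 9))
    print(f"{name}: {ratio:.0f}:1 compression ratio")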

EDIT: With video game data there is a lot of stuff which is fairly standard and identifiable by type fairly easily - especially as a lot of big titles use cut-and-dried techniques - but that can fall apart if someone uses a more revolutionary approach, or misuses features towards an unintended end, as often happens to fake up effects which aren't possible to do normally, etc.

EDIT2: Removed a couple of things which could be confusing without a longer explanation.
 
I can't imagine it won't have some per-application/scenario learning of the most common access patterns - but generally it will be targeted at certain things like lightmaps, which might update dynamically, etc. (although mutability is generally a consideration with these kinds of caches) and other assets that are known by nature to be bandwidth heavy.

Depending on how it is implemented, the interesting thing here is that some of these assets are also highly compressible, i.e. largely greyscale data used for various types of light and height mapping, etc.

The worry with AMD for many is driver problems, and if we need to rely on application-specific profiles for optimization, my concern becomes how well they can keep up that support. Anyone burned by application-specific support for, say, SLI in the past will be familiar with this problem. There's also the question of how much swing we can expect between different games: will some be really well suited to larger caches and others not, and the same for the performance of different features like rasterization vs ray tracing? I'd be interested in a deep dive on this before purchasing.
 
A deep dive will definitely be interesting - there are so many considerations with an implementation like this which generally end up a bit of a mixed story like with hybrid HDD/SSD caching, etc.

My gut feeling on this (mostly a guess) is that it would probably be something like a general rule set based on what is best on aggregate for the games we have today, with application-specific tweaks or overrides on top of that where applicable, to squeeze more performance from specific titles that have been optimized. And then right at the so-called "top end", really specific optimization for either really popular AAA titles or things used as benchmarks. My concern, of course, is the benchmarks and the extreme attention paid to them.
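If that guess is right, the driver-side structure might look something like the sketch below. All names and settings are hypothetical; it's only meant to show the "general rules, then per-title overrides" shape.

Code:
# Very loose sketch of the guessed structure: a general default policy, with
# per-title overrides layered on top for optimised or benchmark-heavy games.
# All names and values here are hypothetical -- nothing from AMD's actual driver.

DEFAULT_POLICY = {
    "cache_render_targets": True,
    "cache_shadow_maps": True,
    "cache_streaming_textures": False,   # too large / too rarely reused by default
}

# Hand-tuned overrides for specific executables (hypothetical examples).
PER_TITLE_OVERRIDES = {
    "Cyberpunk2077.exe": {"cache_streaming_textures": True},
    "3DMark.exe":        {"cache_shadow_maps": False},
}

def policy_for(executable: str) -> dict:
    """Start from the general rules, then apply any title-specific tweaks."""
    policy = dict(DEFAULT_POLICY)
    policy.update(PER_TITLE_OVERRIDES.get(executable, {}))
    return policy

print(policy_for("Cyberpunk2077.exe"))
print(policy_for("SomeIndieGame.exe"))   # falls back to the defaults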

It reminds me of the GeForce FX vs 9700 days, when MSAA was becoming important and we saw rotated-grid variants and all that jazz, but application detection and application-specific changes started to be seen at driver level, which led to aggressive optimizing in specific games or benchmarks that reviewers liked to use. I can't help but feel some caution on this one, especially because this one feature is making up for something close to 50% less memory bandwidth than Nvidia, which is no joke. If they've nailed this with very general rules about what data to put there, and it requires very little attention from software, that's a huge win for AMD.
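For reference, working that figure through from the commonly quoted specs (256-bit GDDR6 at 16 Gbps versus Ampere's GDDR6X) puts the raw deficit at roughly a third against the 3080 and roughly 45% against the 3090:

Code:
# Raw memory bandwidth from the commonly quoted specs:
# bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
def bandwidth_gbps(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

rx6800xt = bandwidth_gbps(256, 16.0)    # 512 GB/s
rtx3080  = bandwidth_gbps(320, 19.0)    # 760 GB/s
rtx3090  = bandwidth_gbps(384, 19.5)    # 936 GB/s

for name, bw in (("RTX 3080", rtx3080), ("RTX 3090", rtx3090)):
    print(f"6800 XT has {(1 - rx6800xt / bw):.0%} less raw bandwidth than the {name}")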
 
I think we definitely need to see the benchmarks from third parties before crowning anyone as having the best.

If you look at the slides for the 6900 XT, for example, that was using the 5900X CPU, while the third-party benchmarks on the 3090s are using 10900Ks and 3900Xs etc., so there are discrepancies with what was in AMD's presentation. Obviously the other hardware and testing environments will be different, but I am not so sure the new AMD GPUs are going to have the "crown". Of course it's only going to be small percentages either way on both the 3080/6800 XT and 3090/6900 XT as to who wins what, but I don't see this being AMD beating Nvidia or Nvidia beating AMD.

Of course the 3090 is a lot more money than the 6900XT though!
 
You sit closer to a monitor and it fills a similar field of vision to sitting back on a sofa playing on a TV.

Stop being obtuse.

I'm on a 1440p ultrawide. I'll take it over a 4K TV any day.

Now if I had a 4k monitor, sure, I can see the argument for fidelity. But not a TV.

Gaming on a 25-inch screen sitting up close compared to a 55-inch 4K screen is like night and day, especially at 4K. In any case, a bigger screen means more immersion than you get from a small 1440p monitor.
 
My gut feeling on this (mostly a guess) is that it would probably be something like a general rule set based on what is best on aggregate for the games we have today, with application-specific tweaks or overrides on top of that where applicable, to squeeze more performance from specific titles that have been optimized. And then right at the so-called "top end", really specific optimization for either really popular AAA titles or things used as benchmarks. My concern, of course, is the benchmarks and the extreme attention paid to them.

It reminds me of the GeForce FX vs 9700 days, when MSAA was becoming important and we saw rotated-grid variants and all that jazz, but application detection and application-specific changes started to be seen at driver level, which led to aggressive optimizing in specific games or benchmarks that reviewers liked to use. I can't help but feel some caution on this one, especially because this one feature is making up for something close to 50% less memory bandwidth than Nvidia, which is no joke. If they've nailed this with very general rules about what data to put there, and it requires very little attention from software, that's a huge win for AMD.

Yeah - it has me cautious - there is a reason why this kind of cache normally tends to be used in a much more limited way, and it won't be the first time someone has thought they've cracked it only for it to struggle to stay relevant as things move on.
 
I mean, it's fair. I knew when we saw the Nvidia benchmarks at their "launch" that they were spun for RT and the 2x boasts, but we all knew we'd really be looking at more like +35% on top of last gen, and honestly that was enough for many of us to be excited anyway. So we just have to reserve judgement with AMD and give them the same treatment. Are these benchmarks legit, are they cherry-picked, does Infinity Cache just give them a straight-up win or does it need to be optimized for every game? All perfectly valid and open questions.

I do think AMD does have some RT-specific transistors in their CUs now which speed up the RT calculations roughly 10x, but it really just depends on how much this translates to in terms of raw RT power. Will it be enough to do, say, basic global illumination or ray traced reflections? Who knows. Their performance won't be on par with a GTX 1000 series though, that's for sure, because raw ray tracing essentially in "software", that is to say just running on unoptimized general purpose hardware, would be very bad. Pretty sure they're not doing that.



The 3090 not being a gaming card is fairly well accepted among many of us and the pro reviewers, and was long before AMD's lineup appeared; it's not a reflexive reaction to their launch. There's even a thread quoting this from Gamers Nexus https://www.overclockers.co.uk/foru...card-gamers-nexus-destroys-the-3090.18900140/ saying it word for word, and I fully agree with all their criticisms. It's what put me off buying a 3090 even though I have the cash for such a purchase. And honestly AMD's lineup for this spot is dead on: the 6900 XT looks like a fantastic piece of hardware, trading blows with a 3090 for much less money, right in the kind of price point I'm comfortable at. It has that slight premium in speed, but the premium in price won't make your eyes bleed, or at least not mine.

As I said, I'm looking to upgrade my rig for a better gaming experience. I don't do competitive online gaming any more like I used to back in the day; I don't need 400-500fps in CS with 1ms latency, I just want pretty games. The extra raster performance from both camps is nice, but not worth much to me if it doesn't buy me new graphical effects. I want an improvement on last generation, and that is now coming more from ray tracing, so it matters to me. So I want to know, if I'm going to get immersed in the world of Cyberpunk 2077, whether I'll be able to use the RT effects and ideally not be punished down to 1080p from my normally expected 4K. It's an open question whether AMD will provide that; IF they can, I'll cancel my 3080 preorder and almost certainly get a 6900 XT. But just like I waited for user reviews of Nvidia's lineup, I'll be waiting for AMD's.

I know you think it's some kind of mental cope to deal with Nvidia getting crushed or whatever, but it's not. I've been in tech for more than 25 years, I work in tech, I've bought high-end video cards almost every generation since before Nvidia even existed, and I've owned a plethora of cards from both AMD (and ATI) and Nvidia, and 3DFX etc. I like AMD; I've used their cards solo, in CrossFire (a pair of 4870s) and in dual-GPU single card form (a 5970). Stop being so blindingly jaded, it shows through.

hurr durr/10


Who cares whether it's fairly well accepted by whomever? People use it to play games; the fact we're discussing it on a GPU forum marks that as a red herring, or little more than a pedantic definition. It's ironically playing games to describe it as not a gaming card. If the 3090 isn't a gaming card then AMD just took the crown by definition, right? But I don't believe that: if the 6900 XT and the 3090 trade blows then there's no winner, and "who has the fastest card" doesn't interest me anyway, that's for e-peeners.

That's what put you off? Someone's pedantic description of it? The only thing that would have put me off is the price, it's ludicrous.

The reason I think you're "coping" is not your RT preferences, I can see that; what made it shine through was trying to crap on AMD's "launch" after NV had the worst launch anyone can remember, and the nonsense about the 3090 not being a gaming card after AMD more or less matched it for 66% of the price.

Both teams are going to spin the hell out of what they're releasing, so neither will prove to be all it's cracked up to be when tested against the other. In the end there's going to be comparable performance across the stacks; I don't care about a few extra FPS in this game and a few less in another, it's too nitpicky, boring, and not significant. So it will come down to price, features (RT isn't important to some), VRAM (for some), power consumption (a small factor for some), and whatever floats your boat really. And of course what you can actually buy.

I think it's actually far more healthy not to have one team on top all the way down the stack, and to have competition up and down it. No competition led us to Turing; who wants to see that again?

It was very funny to see all those who said AMD couldn't compete with NV silenced this afternoon though.
 
I, on the other hand, find it insane to still game at 1440p, a resolution even console gamers are now shunning. My 8-year-old nephew will be gaming at more than twice the resolution you do, on his little console :D

Plus ray tracing will work just fine on RDNA 2.

It appears it will work, but probably at 1/4 or 1/2 resolution while still costing a big performance hit. Remember the units can be used for raster or RT-type ops, not both at the same time.
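A crude way to picture that sharing - the peak rates below are placeholders, not confirmed RDNA 2 figures: any cycle the shared hardware spends on ray intersection is a cycle it can't spend on texture work, so the two throughputs trade off directly.

Code:
# Crude time-sharing model of a per-CU unit that can issue texture/raster work OR
# ray intersection work on a given cycle, but not both.
# Peak rates are placeholders for illustration, not confirmed RDNA 2 numbers.

PEAK_TEX_OPS_PER_CLK = 4.0    # assumed texture ops per CU per clock
PEAK_RAY_TESTS_PER_CLK = 4.0  # assumed ray-box tests per CU per clock

def shared_throughput(rt_fraction: float):
    """Split a CU's cycles between RT and texture work; both rates shrink together."""
    tex = (1.0 - rt_fraction) * PEAK_TEX_OPS_PER_CLK
    rt = rt_fraction * PEAK_RAY_TESTS_PER_CLK
    return tex, rt

for f in (0.0, 0.25, 0.5):
    tex, rt = shared_throughput(f)
    print(f"{f:.0%} of cycles on RT -> {tex:.1f} tex ops/clk, {rt:.1f} ray tests/clk")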

I find it odd you consider 1440p insane, but accept 1080p RT on your 4k scene?
 
I think we definitely need to see the benchmarks from third parties before crowning anyone as having the best.

If you look at the slides for the 6900 XT, for example, that was using the 5900X CPU, while the third-party benchmarks on the 3090s are using 10900Ks and 3900Xs etc., so there are discrepancies with what was in AMD's presentation. Obviously the other hardware and testing environments will be different, but I am not so sure the new AMD GPUs are going to have the "crown". Of course it's only going to be small percentages either way on both the 3080/6800 XT and 3090/6900 XT as to who wins what, but I don't see this being AMD beating Nvidia or Nvidia beating AMD.

Of course the 3090 is a lot more money than the 6900XT though!


Does it actually matter who has the absolute crown though? If they're close enough, that's enough surely? Then factor in price and whatever else matters to you.

People have been saying on this board since Turing that we needed real competition and we've got that, that's what we should be celebrating IMO.
 