The 6800XT vs 3080 benchmark was with Rage Mode and the shared CPU memory thing (Smart Access Memory) OFF. With them on the 6800XT stomps on the 3080. No OCing in the comparison here.
No, it did not. It had both settings enabled and it was in overclock mode.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Right now their only real gaming feature advantage is DLSS. As for speed in games and drivers, how are you coming to that conclusion considering the cards aren't even out yet?
My mate said "shame amd drivers are ****". I asked him when he last had an AMD card. Think he had to go back 6 years.
Exactly this.
It's good to see Nvidia have competition again, but unless AMD can counter DLSS (something in the works is not enough) then any big title will perform better on the 3000 series, because you can be sure Nvidia will pay the devs to support it. On top of this you have AMD's implementation of RT running in the normal shader pipeline. We all know how that performs on a 1080 Ti, and that's why Nvidia added dedicated RT cores to offload the RT calculations on the 2000 and 3000 series.
All this is speculation though, time will tell when we see some real reviews with a wide range of benchmarks.
It's great AMD have bounced back though; competition will drive down the prices for all of us going forward, so whatever happens it's a win-win.
If that's the best possible spin you can put on it then AMD have had a very good day indeed. I really wouldn't have gone with the "the 3090 isn't really a gaming card" routine though; it's used for gaming by gamers, it's a gaming and prosumer card. That reeked a bit of desperation. Also, throwing shade at AMD's "launch" after NV's absolute cluster***k of a launch, from Jensen's kitchen to the present day where people still haven't got cards they paid for on day 1, isn't a good look either. Obviously you did emphasise RT, as that's clearly the fig leaf du jour.
5/10.
https://www.extremetech.com/gaming/316522-report-nvidia-may-have-canceled-high-vram-rtx-3070-3080-cards#:~:text=Micron's yields on GDDR6X are also reportedly poor.&text=GDDR6X is faster-clocked GDDR6,stands for picoJoules per bit).
They say it's a RAM yield issue =/ Seen other places saying the same thing.
That's a big advantage a lot of future games will have; Nvidia will make sure of it now they have some competition.
I swear it scares me, the high post count some people have and the drivel they spout with clearly zero understanding of Infinity Cache.
PS: Infinity Cache will have been designed around current games and an estimate of future ones, but they currently only have a 50-60% hit rate with current titles, and that will go down with future titles. Unlike console development, games on the PC by and large won't be designed to optimise usage of it.
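To put rough numbers on why that hit rate matters, here's a back-of-envelope sketch. The bandwidth figures are placeholders rather than confirmed specs (512 GB/s for a 256-bit GDDR6 bus, ~1600 GB/s for the on-die cache), and the hit rates just bracket the 50-60% mentioned above:

```python
# Back-of-envelope effective bandwidth for a big cache sitting in front of VRAM.
# Figures are assumptions for illustration, not confirmed specs.
VRAM_BW_GBS = 512     # assumed raw GDDR6 bandwidth on a 256-bit bus
CACHE_BW_GBS = 1600   # assumed on-die Infinity Cache bandwidth

def effective_bandwidth(hit_rate: float) -> float:
    """Weighted average: hits are served from the cache, misses go to VRAM."""
    return hit_rate * CACHE_BW_GBS + (1.0 - hit_rate) * VRAM_BW_GBS

for hr in (0.4, 0.5, 0.6):
    print(f"hit rate {hr:.0%}: ~{effective_bandwidth(hr):.0f} GB/s effective")
```

On those (assumed) numbers you get roughly 950, 1060 and 1160 GB/s at 40/50/60% hit rates, so if future titles push the hit rate down, the effective figure slides back towards the raw 512 GB/s, which is exactly the concern.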
One of my major questions is on Infinity Cache. Obviously not all data is equal: some is read from the VRAM with regularity, which makes it better to cache on chip if you can, whereas some is read very rarely, which makes it basically pointless to cache. And while Infinity Cache is a LOT of cache vs other caching systems, it's not a lot of data vs the total VRAM size. So what you put in there is going to be extremely important with regards to the improvement you get. I'm extremely interested in how this is decided, as we don't really seem to know right now. As I think you're suggesting, it might be general rules regarding typical usage we see in games today, is that right? Or could it be a machine-learned algorithm that decides? Or is this going to be decided per application profile, which requires constant attention to games? Could there be some games that get huge benefits from this and others that don't? I think these are all valid questions we won't have answered until we can benchmark the hardware against a suite of different games.
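Just to make the "how is it decided?" question concrete, here's a toy sketch of one completely generic policy (plain LRU). To be clear, this is not a claim about how Infinity Cache actually picks what stays resident, since AMD hasn't said; it only shows how a simple recency rule keeps frequently reused data in and lets one-off streamed data fall out on its own:

```python
from collections import OrderedDict

# Toy LRU cache of fixed capacity, purely for illustration.
class LRUCache:
    def __init__(self, capacity_lines: int):
        self.capacity = capacity_lines
        self.lines = OrderedDict()        # cache-line address -> dummy payload
        self.hits = self.misses = 0

    def access(self, addr: int) -> None:
        if addr in self.lines:
            self.hits += 1
            self.lines.move_to_end(addr)          # mark as most recently used
        else:
            self.misses += 1
            self.lines[addr] = True
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)    # evict least recently used

# A workload with a small "hot" set touched every frame (think lightmaps)
# and a trickle of one-off accesses (think streamed textures).
cache = LRUCache(capacity_lines=1000)
for frame in range(100):
    for hot in range(800):                        # hot set fits in the cache
        cache.access(hot)
    for cold in range(50):                        # never reused again
        cache.access(1_000_000 + frame * 50 + cold)

total = cache.hits + cache.misses
print(f"hit rate: {cache.hits / total:.1%}")
```

With the hot set fitting in the cache, the one-off reads never manage to push it out and the hit rate stays high; whether real game access patterns are anywhere near that friendly is exactly the open question.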
I can't imagine it won't have some per-application/scenario learning of the most common access patterns - but generally it will be targeted at certain things like lightmaps, which might update dynamically, etc. (although mutability is generally a consideration with this kind of cache), and other assets that are known by nature to be bandwidth heavy.
Depending on how it is implemented, the interesting thing here is that some of these are also highly compressible, i.e. largely greyscale data used for various types of light and height mapping, etc.
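Rough numbers on why that kind of data is attractive to keep on-die (sizes and formats here are purely illustrative, not what any particular engine ships): a single-channel map is a quarter the size of full RGBA before any compression even gets involved, so far more of it fits in a 128 MB cache.

```python
# Illustrative footprint maths only; formats and sizes are assumptions.
CACHE_MB = 128

def map_size_mb(width: int, height: int, bytes_per_texel: int) -> float:
    return width * height * bytes_per_texel / (1024 ** 2)

r8   = map_size_mb(4096, 4096, 1)   # single-channel 8-bit, e.g. a height/light map
rgba = map_size_mb(4096, 4096, 4)   # full 8-bit RGBA colour data

print(f"4k x 4k R8 map:   {r8:.0f} MB -> about {CACHE_MB // r8:.0f} fit in the cache")
print(f"4k x 4k RGBA map: {rgba:.0f} MB -> about {CACHE_MB // rgba:.0f} fit in the cache")
```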
I'd be interested in a deep dive on this before purchasing.
A deep dive will definitely be interesting - there are so many considerations with an implementation like this which generally end up a bit of a mixed story like with hybrid HDD/SSD caching, etc.
You sit closer to a monitor and it fills a similar field of vision as sitting back on a sofa playing on a TV.
Stop being obtuse.
I'm on a 1440p ultrawide. I'll take it over a 4k TV anyday.
Now if I had a 4k monitor, sure, I can see the argument for fidelity. But not a TV.
My gut feeling on this (mostly a guess) is it would probably be something like a general rule set based on what is best on aggregate for the games we have today, and then application-specific tweaks or overrides on top of that where applicable, to squeeze more performance from specific titles that have been optimized. And then right at the so-called "top end", probably really specific optimization for either really popular AAA titles or things used as benchmarks. My concern of course is the benchmarks and the extreme attention paid to them.
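To make that guess a bit more concrete, the layering I have in mind looks something like the sketch below. Every name and setting here is invented for illustration; this obviously isn't AMD's driver, just the general "defaults plus per-title overrides" shape:

```python
# Hypothetical driver-side policy layering; invented names throughout.
DEFAULT_POLICY = {
    "cache_lightmaps": True,
    "cache_shadow_maps": True,
    "cache_streamed_textures": False,   # mostly one-shot reads, not worth caching
}

PER_TITLE_OVERRIDES = {
    "SomePopularShooter.exe": {"cache_streamed_textures": True},
    "FamousBenchmark.exe":    {"cache_shadow_maps": False},
}

def policy_for(executable: str) -> dict:
    """Start from the global defaults, then apply any per-title tweaks."""
    policy = dict(DEFAULT_POLICY)
    policy.update(PER_TITLE_OVERRIDES.get(executable, {}))
    return policy

print(policy_for("UnknownIndieGame.exe"))   # just gets the defaults
print(policy_for("FamousBenchmark.exe"))    # gets the hand-tuned profile
```

The worry is that last layer: once per-title profiles exist, the most hand-tuned ones will be the games and benchmarks reviewers lean on.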
It reminds me of the GeForce FX vs 9700 days, where MSAA was becoming important and we saw rotated-grid variants and all that jazz, but application detection and application-specific changes started to be seen at driver level, which led to aggressive optimizing in specific games or benchmarks that reviewers liked to use. I can't help but feel some caution on this one, especially because this one feature is making up for something close to 50% less memory bandwidth than Nvidia, which is no joke. If they've nailed this with very general rules about what data to put there and it requires very little attention from software, that's a huge win for AMD.
I mean, it's fair. I knew when we saw the Nvidia benchmarks at their "launch" that they were spun for RT and 2x boasts, but we all knew we'd be looking at more like +35% again on top of last gen, and that honestly was enough for many of us to be excited anyway. So we just have to reserve judgement with AMD and give them the same treatment. Are these benchmarks legit, are they cherry picked, does Infinity Cache just give them a straight-up win or does it need to be optimized for every game? All perfectly valid and open questions.
I do think that AMD does have some RT-specific transistors in their CUs now which speed up the RT calculations 10x, but it really just depends on how much this translates to in terms of raw RT power. Will it be enough to do, say, basic global illumination or ray traced reflections? Who knows. Their performance won't be down at GTX 1000 series levels though, that's for sure, because doing ray tracing essentially in "software", that is to say just running on unoptimized general-purpose transistors, would be very bad. Pretty sure they're not doing that.
The 3090 not being a gaming card is fairly well accepted among many of us and the pro reviewers, and long before AMD's lineup appeared; it's not a reflexive reaction to their launch. There's even a thread quoting this from Gamers Nexus https://www.overclockers.co.uk/foru...card-gamers-nexus-destroys-the-3090.18900140/ saying it word for word, and I fully agree with all their criticisms. It's what put me off buying a 3090 even though I have the cash for such a purchase. And honestly AMD's lineup for this spot is dead on: the 6900XT looks like a fantastic piece of hardware, trading blows with a 3090 for much cheaper, right in the kind of price point I'm comfortable at. It has that slight premium in speed but the premium in price won't make your eyes bleed, or at least not mine.
As I said, I'm looking to upgrade my rig to have a better gaming experience. I don't do competitive online gaming anymore like I used to back in the day; I don't need 400-500 fps in CS with 1 ms latency. I just want pretty games. The extra raster performance of both camps is nice but not worth much to me if it doesn't buy me new graphical effects. I want an improvement on last generation, and that is now coming more from ray tracing, so that matters to me. And so I want to know if I'm going to get immersed in the world of Cyberpunk 2077 and be able to use the RT effects while ideally not being punished down to 1080p from my normally expected 4K. It's an open question whether AMD will provide that; IF they can, then I'll cancel my 3080 preorder and almost certainly get a 6900 XT. But just as I waited to see user reviews of Nvidia's lineup, I'll be waiting to see AMD's.
I know you think that it's some kind of mental cope to deal with Nvidia getting crushed or whatever, but it's not. I've been in tech for more than 25 years, I work in tech, I've bought high-end video cards almost every generation since before Nvidia even existed, and I have owned a plethora of cards from AMD (and ATI), Nvidia, 3dfx, etc. I like AMD; I've used their cards solo, in CrossFire (a pair of 4870s) and in dual-GPU single-card form (a 5970). Stop being so blindingly jaded, it shows through.
hurr durr/10
I, on the other hand, find it insane to still game at 1440p, a resolution that even console gamers are now shunning. My 8-year-old nephew will be gaming at more than twice the resolution compared to you on his little console.
Plus ray tracing will work just fine on RDNA 2.
I think we definitely need to see the benchmarks from 3rd parties before crowning anyone as having the best.
If you look at the 6900XT slides, for example, those were using the 5900X CPU, and then you have 3rd-party benchmarks of the 3090 using 10900Ks and 3900Xs etc., so there are discrepancies with what was in AMD's presentation. Obviously the other hardware and testing environments will be different, but I am not so sure the new AMD GPUs are going to have the "crown". Of course it's only going to be small percentages either way on both the 3080/6800XT and 3090/6900XT as to who wins what, but I don't see this being AMD beating Nvidia or Nvidia beating AMD outright.
Of course the 3090 is a lot more money than the 6900XT though!