AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

I did compromise on settings on my RTX 2080 to get it to play nice. If I still have to make compromises on RDR2 and HZD with the newest AMD GPU, I might as well just stick with my 2080.

So although you're coming from a Vega 56, I understand the thought pattern... if I need to start fiddling with settings on my NEW GPU, I might as well not upgrade, as the 2080 and 1080 Tis are OK... not amazing, but they're not a gigantic gap away from 2080 Ti performance.

Agree. Check the tests people are doing before committing. You should be able to see if those games are good to go at that res.
 
Agreed!
 
I don't know why AMD bother stoking up all this hype about a year before a GPU is released... especially if they aren't exactly sure when they can release the planned GPUs. Maybe they are just trying to encourage investors to buy shares?

Why not (as much as possible) just keep it all a secret until a couple of weeks before the launch? It'd be a nice surprise and stop all this gossip and unfounded rumour spreading.
 
NVIDIA have done so much wrong in this launch.
- Questionable decisions by AIBs cheaping out on components

TBF to the AIBs, they just used Nvidia's reference designs. Everyone seems to be saying they cheaped out, when it appears that Nvidia made a reference design that isn't capable of running their own hardware. Says a lot imo when Nvidia chose not to use their own reference design.
 
Not about GPUs, but who is gonna get a Zen 3 CPU (presumably on a similar standard 7nm fab process) this year? Assuming they can reach 5 GHz, that is.

I'm going to ride it out on my 4770K for a while longer, probably at least until Alder Lake 10nm desktop CPUs and 7nm AMD CPUs are released (with DDR5 RAM). I believe this CPU will cost me about 5-10 FPS in some games at 4K resolution.
 
I don't know why AMD bother stoking up all this hype about a year before a GPU is released...
With all your analysis of die sizes, transistor densities and GCN, I presume you've missed the wee point that AMD haven't hyped a damn thing? This is classic hype train from a userbase that desperately wants to see AMD kick some ass. In fact, weren't you among the vocal group decrying AMD taking so long to say anything and not releasing information the second Nvidia paper-launched their latest screw-up?

AMD haven't said a damn thing. Can't stoke hype if you don't say anything.
 
weren't you among the vocal group decrying AMD taking so long to say anything and not releasing information

Alas, it's true. I'd rather they literally said / confirmed nothing about RDNA2 GPUs (including consoles) than just say "we're working to our own schedule" etc, and "please don't hate/kill us, just wait a liiittle bit more" :D.
 
This comment needs to be pinned somewhere. A very accurate analysis of Nvidia's masterful pricing strategy, yet so many will either ignore or flat out deny this despite it all being so simple to follow. The 3080 will continue to be thought of as a £650 product by the very same people who think AMD should deliver a superior card and price it below £600! You have to hand it to Nvidia, they couldn't have played this any better. Despite witnessing its disastrous launch, many still see the 3000 series as good value for money (based on the fake MSRP) and continue to spread delusional messages such as "AMD isn't launching soon enough and everyone is buying 3080s".


The majority of the cards I have seen fall within $50 of MSRP including at least one from each major brand at the $699 mark.

That is worlds better than the 2080Ti's pricing.
 
The majority of the cards I have seen fall within $50 of MSRP including at least one from each major brand at the $699 mark.

That is worlds better than the 2080Ti's pricing.

Wow, seriously? Checked your post history and it's clear you're an Nvidia fan, but the cheapest 3080 on OCUK is £690, that's £40 over MSRP, and they max out at £900... that's £250 over MSRP. In fact the next cheapest is £720, that's £70 over MSRP.

Take your head out of the sand and peddle your tripe elsewhere.
 
Wow, seriously? Checked your post history and it's clear you're an Nvidia fan, but the cheapest 3080 on OCUK is £690, that's £40 over MSRP, and they max out at £900... that's £250 over MSRP. In fact the next cheapest is £720, that's £70 over MSRP.

Take your head out of the sand and peddle your tripe elsewhere.

WTF are you on about?

I have railed against Turing from the moment it was launched. Turing's price/performance was crap. That's why I skipped it. Even when I got a Reverb and my 2080Ti started to struggle, I REFUSED to pay $1200 for performance that was only worth about $700 as a run-of-the-mill generational performance bump over my 1080Ti.

I'm looking at Newegg, and the prices range from MSRP for a bunch of cards to $810 for a factory OC FTW... about a $110 spread.
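
For what it's worth, here's a rough back-of-envelope sketch (Python, purely illustrative) of how far the listings quoted in this thread sit above the £650 / $699 MSRPs. The over_msrp helper is just made up for this sketch, and the prices are the snapshot figures from these posts, not authoritative pricing data:

# Rough sketch: premium over MSRP for the 3080 listings quoted in this thread.
# Prices are the figures mentioned above (OCUK in GBP, Newegg in USD) at the
# time of posting; they are snapshots, not authoritative pricing data.

def over_msrp(price, msrp):
    """Return the (absolute, percentage) premium of a listing over MSRP."""
    diff = price - msrp
    return diff, 100.0 * diff / msrp

listings = [
    ("OCUK cheapest", 690, 650, "£"),
    ("OCUK next cheapest", 720, 650, "£"),
    ("OCUK most expensive", 900, 650, "£"),
    ("Newegg factory OC FTW", 810, 699, "$"),
]

for name, price, msrp, currency in listings:
    diff, pct = over_msrp(price, msrp)
    print(f"{name}: {currency}{price} is {currency}{diff} ({pct:.0f}%) over the {currency}{msrp} MSRP")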
 
The estimates here:
https://www.techpowerup.com/gpu-specs/radeon-rx-6800-xt.c3694
https://www.techpowerup.com/gpu-specs/radeon-rx-6900-xt.c3481

actually seem quite reasonable. They suggest that the 6800 XT would be limited to a ~50% increase in performance (plus extra from clock rate and IPC increases) vs the 5700 XT, due to having just 64 ROPs (similar to the RTX 3070).

And the 6900 XT would be limited to a ~75% increase in performance (plus extra from clock rate and IPC increases), due to having potentially 96 ROPs (similar to the RTX 3080).
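
Purely as a back-of-envelope illustration of that ROP-parity ceiling argument (a minimal sketch: the ~50%/~75% uplifts and the ROP counts are the rumoured figures above, while the 60 FPS baseline is just an assumed example, not a benchmark):

# Back-of-envelope sketch of the ROP-parity ceiling argument above.
# The ~50% / ~75% uplifts and the ROP counts are the rumoured figures from this
# post; the 60 FPS baseline is an illustrative assumption, not a benchmark.

rumoured_cards = {
    # card: (rumoured ROPs, Nvidia card with a similar ROP budget, assumed uplift vs 5700 XT)
    "RX 6800 XT": (64, "RTX 3070", 0.50),
    "RX 6900 XT": (96, "RTX 3080", 0.75),
}

baseline_fps = 60.0  # hypothetical RX 5700 XT frame rate in some title/resolution

for card, (rops, peer, uplift) in rumoured_cards.items():
    ceiling_fps = baseline_fps * (1.0 + uplift)
    print(f"{card} ({rops} ROPs, {peer}-class): ~{ceiling_fps:.0f} FPS ceiling "
          f"(+{uplift:.0%}), before any extra clock/IPC gains")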

I think the memory bus widths look a bit low; I'm surprised the estimates aren't more in line with the next-gen consoles, but I don't think this is the main performance factor.

Also, I think the 6/8 MB of cache is probably meant to read as L1 cache, rather than L2. Isn't L1 cache more potent / lower latency than L2?

P.S. I personally don't think it's a particularly good idea for AMD to compete with the RTX 3090 with a GPU above the 6900 XT. Such a card would have more ROPs and higher production costs; is it worth the cost and effort for AMD? Overclocked versions of the 6900 XT might come close anyway.
 