22%. Slightly off topic, I know, but the 9070 XT is tipped to have roughly the same RT performance as the 4070 Ti. What sort of performance bump was there from the 3080 to the 4070 Ti for ray tracing?
Good to see someone knows how to calculate percentages correctly. It's amazing how often people who should know better get it wrong.
Thanking you. I did try looking for comparison videos, but I couldn't find what I was looking for that didn't include other factors.
You have heard my compatriots recently (once again) voted in a president who is an unhinged reality TV presenter? I keep thinking it's just AI fighting itself.
Nobody can be that stupid surely?
The trouble AMD have is that they are already on the back foot on GPUs. Given the same performance on competing cards, just to get equal market share they have to price below Nvidia because of people's mindset. The educated could argue that Nvidia have an extended feature set, but I'm sure the general mid-range GPU-buying public aren't touching those features; it's just how people are.
It's no different from how it used to be with CPUs, no matter how competitive AMD were "people" bought Intel.
Both AMD and Nvidia will have paid spies in enough places of the supply chain to know each other's approximate performance. They still need to clear their old stock, so are they holding off for that? Do they really not know the performance of Nvidia's 50 series, or do they know that their cards are a bit of a disappointment as well? Nvidia might be struggling to increase raw performance, so they focused on the extra bells and whistles to make up for it; if that's the case then I'm sure AMD would be struggling, too.
I don't understand. Why is it not 18%? I've been staring at the image like it is some sort of brain teaser lol.
What is 100 as a percentage of 82?
Ohhhhhhhhhhhhh. Well, don't I feel silly haha.
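The arithmetic behind the confusion above can be sketched out, using hypothetical benchmark scores of 100 and 82 (the actual chart figures aren't given in the thread):

```python
# Hypothetical scores: the faster card indexes at 100, the slower at 82.
slow, fast = 82, 100

# How much faster the 100-point card is than the 82-point one:
lead = (fast / slow - 1) * 100      # 100/82 - 1 ≈ 0.22 → ~22%

# How much slower the 82-point card is than the 100-point one:
deficit = (1 - slow / fast) * 100   # 1 - 82/100 = 0.18 → 18%

print(f"lead: {lead:.0f}%")        # lead: 22%
print(f"deficit: {deficit:.0f}%")  # deficit: 18%
```

Both numbers are correct; they just answer different questions. 18% is how far the slower card trails, while 22% is how far the faster card leads, because the lead is measured against the smaller baseline.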
"Aren't AMD cards meant to be a bit better for OC'ing compared to Nvidia, or is that something I've just made up in my head ha."

They used to have a lot more headroom (my memory only goes back to the tail end of Polaris; Vega 56/64 on 14nm had a lot of headroom, as did the Radeon VII on 7nm). But as the generations roll on they've made better, more optimised and more gaming-specific designs, and there's less headroom as a result. Even back in the Radeon VII days every card came out of the factory with its own voltage curve, so they did try to make use of binning for the consumer's benefit. However, the safety margin was large, I think partly to account for compute workloads being much more sensitive to voltage (what works for gaming might not work for all compute workloads; tl;dr, games are poorly optimised and give the GPU enough of a breather to not crash and burn). I wouldn't be surprised if the safety margin has also been reduced greatly with RDNA 1/2/3/4 from being more focused on gaming.
"You know how the ASRock card is meant to come with a different power connector. Will there be an adapter in the box, do we think? Just my PSU isn't ATX 3.1, so at the moment I just use 3x 8-pin jobbies. Not that I am wanting to get ahead of myself here."

I would like to think so. Most do if it's a unique connector. I'm toying with the idea of upgrading my 750W Seasonic PSU. The coil whine is something else when the GPU is ramped up. I initially thought it was my Vega 56, but it was exactly the same when I changed to the 3080.
That makes sense. Good explanation too.