NVIDIA 4000 Series

Spot on. Hopefully AMD get the message and start to go back to pricing cards properly while trying their best to catch up with Nvidia.

They really need to do this if they want to gain market share, or else just focus on the mid and low range and then, once the market share is there, move up to the high end, i.e. have a Ryzen moment. The problem is that this strategy requires Nvidia to **** up, which they are unlikely to do.

I think the way they conduct themselves in live shows etc. says a lot too. Nvidia don't ever really mention or make comments about their competition, whereas AMD always have to make some kind of remark and/or compare themselves to Nvidia. I think this actually does more harm to themselves and their image tbh, especially when they end up falling flat on their face come release time, i.e. "poor Volta", "overclockers dream", then making comments about the 4090 fire hazard only to have a worse fire hazard situation themselves with the vapour chambers.... Then there's things like over-promising on FSR upscaling improvements, only to take one step forward and two steps back, and on the frame gen front a kneejerk reaction of promising it would come to loads of games "very soon" (fast-forwarding the usual AMD timeline of several months or a year or more), only for it to be a **** show: not working with adaptive sync, frame pacing issues, Anti-Lag driver injection getting people banned in online games and so on, then requiring another several months to get it to a usable state.

They need to take a step back and rethink their whole image at this point now imo.
 
Nvidia don't ever really mention or make comments about their competition, whereas AMD always have to make some kind of remark and/or compare themselves to Nvidia.
This is the benefit of being the market leader - vs. being in second or third place. You don't need to compare yourself when you're in front.

Apple never talks about Windows laptops, they only compare themselves to themselves. Microsoft (and Qualcomm) recently did an entire event comparing their new laptops to the entire Apple range.

It's the same across all industries. I work in homewares and we're the market leader, our trade press never talks about our items/ranges in comparison to our competitors because we don't need to - whereas our competitors focus entirely on us with their material because it's the main point of comparison.
 
Pretty much this.

I've had basically every AMD GPU generation from the 3850 to the Vega 56, but since then the gap has only widened more and more. Originally it was mostly DLSS and RT grunt in a handful of games, but here we are now where we have:

- ray reconstruction, which not only provides better IQ but also slightly better performance
- RTX HDR, an absolute must-have for any HDR fan; so many games have **** native HDR implementations, usually raised blacks, which is a no-go on OLED
- RT grunt/hardware design; basically every game uses some form of RT now and it shows, because Ampere is pulling even further ahead of its competing RDNA 2 tech, usually being on par with the 7900 XT in the games which use HW RT

Yes, you might pay a "premium" for Nvidia kit (much less of a premium nowadays compared to the ATI/early-AMD days, and depending on how and where you shop....) but at the end of the day, Nvidia tech just simply works better. Recently I moved from a G-Sync Ultimate monitor to a G-Sync Compatible one (basically the same kind of monitor, AW34DW to AW32), and the G-Sync module is far superior to adaptive sync/FreeSync/G-Sync Compatible: far better handling of VRR at <60 fps (I also noticed this with my previous FreeSync Premium screen before moving to the G-Sync Ultimate OLED one), it's much smoother, and it has considerably less flickering. As shown by RTINGS, it's the only OLED screen on the market not to face severe flickering issues below 60 fps, thanks to the G-Sync module; that monitor gets a 9+ rating whilst the FreeSync version gets a 5 for VRR flicker.
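For anyone wondering why the sub-60 fps behaviour differs so much between the module and generic adaptive sync: the usual trick at low frame rates is low framerate compensation (LFC), where each frame is repeated so the panel stays inside its VRR window instead of dropping out of variable refresh. A rough Python sketch of the idea, with made-up numbers for the VRR window (this is not the actual G-Sync or FreeSync implementation, just an illustration of the principle):

```python
# Rough illustration of low framerate compensation (LFC).
# Numbers are made up; not the actual G-Sync/FreeSync logic.

VRR_MIN_HZ = 48   # assumed lower bound of the panel's VRR window
VRR_MAX_HZ = 240  # assumed upper bound

def refresh_for_frame(fps: float) -> tuple[float, int]:
    """Return (panel refresh rate, repeats per frame) for a given game fps.

    If the frame rate falls below the VRR window, repeat each frame enough
    times that the effective refresh stays inside the window, instead of
    letting the panel drop out of variable refresh (which causes judder
    and, on OLED, visible brightness/flicker changes).
    """
    if fps >= VRR_MIN_HZ:
        return min(fps, VRR_MAX_HZ), 1
    repeats = 2
    while fps * repeats < VRR_MIN_HZ:
        repeats += 1
    return fps * repeats, repeats

for fps in (30, 40, 55, 90):
    hz, n = refresh_for_frame(fps)
    print(f"{fps} fps -> panel runs at {hz:.0f} Hz ({n}x frame repeat)")
```

How well that frame repetition is timed is where a dedicated module and a generic scaler can differ, which is what shows up as frame pacing and flicker differences at low fps.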

The one thing where AMD has arguably matched Nvidia now is frame gen (Nvidia FG, as evidenced by Alex/DF, still has the lead from a technical perspective for things like latency and IQ when comparing side by side), as long as devs can implement it well, which is the problem...... Ghost of Tsushima is the best AMD FG experience so far, followed by the mod version.



It's much less of an issue for 10GB still. I have yet to face all these supposed issues with 10GB at both 3440x1440 and 4K (if we're not talking about well-known games that were broken on launch day and have since been fixed/patched....), outside of self-sabotaging myself in a certain game. The more legit cases are CP 2077 when modded with textures and AW 2, and even then, having more VRAM like the 3090's 24GB would not have helped, since grunt is also a problem in those two games when maxed. My main problem is lack of grunt, as shown by all the benchmarks out there: 24GB hasn't saved a 3090 outside of a couple of launch-day titles last year (which, as noted, were broken and have since been resolved), and it could be argued even the current-gen top-end GPUs lack grunt without resorting to upscaling and/or frame gen when playing at high res and/or high Hz, hello Hellblade 2.... The good news is that this year things seem much better on the VRAM front: UE5 titles have been using low amounts of VRAM, even less than 8GB dedicated, and Ubisoft have moved to a much better engine that properly handles VRAM management, as shown in Avatar and explained by Alex and the Snowdrop engine developers.

Since getting the AW32 QD-OLED 4K, I game more at this res than on my AW34 3440x1440, where I used DLDSR. Games run and look better at 4K with DLSS Performance than they did at native 3440x1440, DLSS Quality, or DLDSR 1.78x with DLSS Performance (DLDSR has an overhead cost). Of course I'm not getting anywhere near 240Hz/fps no matter what settings are used, hence why a 50xx is sorely needed now.
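For context on why 4K with DLSS Performance can run better than native 3440x1440: Performance mode renders internally at half the output resolution per axis, so the GPU is shading fewer pixels than native ultrawide before the upscale. A quick back-of-the-envelope comparison in Python (standard 0.5x/0.667x DLSS scale factors assumed; DLDSR overhead isn't modelled):

```python
# Back-of-the-envelope pixel counts: native ultrawide vs 4K + DLSS Performance.
# DLSS Performance renders at 0.5x per axis, DLSS Quality at ~0.667x per axis.

def pixels(w: int, h: int, scale: float = 1.0) -> int:
    return int(w * scale) * int(h * scale)

native_uw  = pixels(3440, 1440)         # native 3440x1440
uw_quality = pixels(3440, 1440, 0.667)  # DLSS Quality on the ultrawide
uhd_perf   = pixels(3840, 2160, 0.5)    # 4K output, DLSS Performance (1920x1080 internal)

print(f"Native 3440x1440:       {native_uw:>9,} px")
print(f"3440x1440 DLSS Quality: {uw_quality:>9,} px")
print(f"4K DLSS Performance:    {uhd_perf:>9,} px")
# ~2.07M shaded pixels for 4K Performance vs ~4.95M for native ultrawide,
# which is why it can run (and, with the 4K output target, look) better.
```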

EDIT:

As also noted before, thanks to things like FG and upscaling, GPUs are lasting longer too by being able to max settings. mrk has shown this beautifully with CP 2077, where max raster at native 4K is a considerably worse experience, visually and performance-wise, than using DLSS frame gen and upscaling with path tracing. If we didn't have such "cheating" software features, this simply would not be possible on any current GPU, nor for the foreseeable future either. Funnily enough, the AMD/console fans are lapping up this kind of tech now..... ;) :p :D
Ray Reconstruction needs a fairly reasonable load to keep up / give better performance. In CB77, it loses quite a bit of performance just with RT Reflections ON on a 4080.
 
This is the benefit of being the market leader - vs. being in second or third place. You don't need to compare yourself when you're in front.

Apple never talks about Windows laptops, they only compare themselves to themselves. Microsoft (and Qualcomm) recently did an entire event comparing their new laptops to the entire Apple range.

It's the same across all industries. I work in homewares and we're the market leader, our trade press never talks about our items/ranges in comparison to our competitors because we don't need to - whereas our competitors focus entirely on us with their material because it's the main point of comparison.

I suppose the problem is that in order to compare yourself you have to actually deliver too, otherwise the company ends up making itself look like the clown, as AMD have done a few times now.

Ray Reconstruction needs a fairly reasonable load to keep up / give better performance. In CB77, it loses quite a bit of performance just with RT Reflections ON on a 4080.

I suppose it is scene dependent; most of the time RR performs better for me on the 3080. Although tbf, in The First Descendant, ray reconstruction performs worse, but I think this is a bug as there were some other issues with RR in that game. They may have been fixed by now, but I haven't checked since release.
 
Surprising this!

Funny that now, with AI, they want to investigate Nvidia, yet for years gamers have had to put up with a monopoly of over 80% and prices being hiked well above inflation every generation.
 
Funny that now, with AI, they want to investigate Nvidia, yet for years gamers have had to put up with a monopoly of over 80% and prices being hiked well above inflation every generation.

The wonders of what having no competition will do to the market....
 
I have a 4080 Super FE now, and the encode/decode chip has definitely had a buff: previously I couldn't play 1440p/60 or 2160p/60 on YouTube without stutters using hardware decoding, now I can.

Also tested the FF7 remake and wow, what an improvement with the extra VRAM. Although I can't help but feel that in a few years 16GB won't be enough.

:)

The lead is too great imo; if AMD manage to get their share back to the 20% range over time, then I think they'd be doing well.

Not holding my breath for Intel either to be honest!
 
Surprising this!

Surprised the EU hasn't acted yet; they are normally a lot more on it when it comes to consumer rights and the like.
 
France raised concerns about CUDA dependency. Do they intend to make it open source, or force Nvidia to provide and support a similar solution for all hardware configurations?
 
Nah, they'll flit between 0-2% imo, hardly worth their time. Wouldn't surprise me if they ditch consumer gfx once they've lost a few more billion on it and go totally AI focussed.
Yeah, I think they have the capability to do it, but it'll probably take them too long to get there for them to maintain interest in the sector. Personally, even if the largest Battlemage can come close to the XTX/4080 at a lower price, that would be a win, but I'm not sure they have the interest for it; as AMD have kinda shown, there's more revenue per die in CPUs.

Edit: that said, whoever manages to get MCM nailed down first will have a strong advantage!
 
Edit: that said, whoever manages to get MCM nailed down first will have a strong advantage!

I was wondering if everything in a game needs to be synced in real time. Suppose the lighting calculations are lagging by 1-2 milliseconds; you could still render the frame with the available lighting data, and it's not going to create noticeable artifacts. Then you could further parallelise the load for a proper multi-chip approach like you have in AI applications.
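That's roughly how temporally amortised techniques already work: lighting (GI probes, reflections, etc.) gets updated asynchronously and the renderer just consumes whatever the latest completed result is, rather than stalling the frame waiting for it. A toy sketch of that "use the latest available lighting" pattern in Python threads (names are made up, nothing engine-specific); the same decoupling is what would let the lighting work live on a second chip:

```python
# Toy sketch: a render loop that never waits for lighting, it just uses the
# most recent result a background "lighting" worker has published.
# Names are made up for illustration; no real engine API here.
import threading
import time

latest_lighting = {"frame": 0, "data": "initial lighting"}
lock = threading.Lock()

def lighting_worker(stop: threading.Event):
    frame = 0
    while not stop.is_set():
        time.sleep(0.004)          # pretend GI takes ~4 ms, slower than a frame
        frame += 1
        result = f"lighting result {frame}"
        with lock:                 # publish atomically, overwriting older results
            latest_lighting.update(frame=frame, data=result)

def render_loop(frames: int):
    for i in range(frames):
        with lock:                 # grab whatever lighting is newest right now
            lighting = dict(latest_lighting)
        print(f"render frame {i:2d} using {lighting['data']}")
        time.sleep(0.002)          # pretend the rest of the frame takes ~2 ms

stop = threading.Event()
threading.Thread(target=lighting_worker, args=(stop,), daemon=True).start()
render_loop(20)
stop.set()
```

The render loop never stalls on the lighting thread; it just runs a frame or two behind on lighting, which is exactly the 1-2 ms of lag described above.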

"poor volta", "overclockers dream" then making comments about the 4090 fire hazard only to have a worse fire hazard situation themselves with the vapour chambers

LOLOLOLOLOL
 