
NVIDIA 4000 Series

That's correct, currently they are both just as bad. But in the past it was better and AMD had a bit bigger market share, though not by much. Advertising still sucks on their side.

This, competition is good, who knew? Look at Intel and AMD now: competition between them is solid, and as a result over the last few years we have had, I would argue, some of the best CPUs we have had in decades. In CPU terms it's never been better to be alive.

There have been plenty of times in the past where AMD was considerably better and cheaper than Nvidia on the same "shelf" - it didn't make people rush for AMD cards most of the time, though. Nvidia is just much better known. Same with CPUs vs Intel; it's not just GPUs.

ATI tried really hard to break into Nvidia's mindshare, with some limited success. They spent big on R&D and sold cards cheap to try and break Nvidia's back. The trouble is ATI's mindshare hit a wall, and all Nvidia did was wait them out. The GTX 480 was a pig of a card, objectively a bad card and ridiculed for it by the tech press at the time, while the HD 5870 earned a lot of kudos from the same press. The GTX 480 outsold it. Nvidia's bad card vs AMD's good card, and Nvidia still won.
AMD have tried, despite ATI going bust trying, and they have learned that the only way to shift that mindshare is if Nvidia make a bad card and AMD make one 3X as good at half the price - never going to happen.
The RX 7700 XT is objectively a much better card than the 4060 Ti, and yet the former doesn't even show up on the Steam Hardware Survey, while the 4060 Ti does and shows up well. Same with the 7800 XT: that is a great card and cheaper than the worse 4070.

I think AMD have learned the reality and are resigned to it. You can paint a pile of dog poo green and people will pay £2000 for it. AMD can make an actual GPU and people will buy the green dog poo.
One cannot compete with that.

As far as I'm aware, DX11 issues were DX11 issues - as in, the problem was in DX itself. Nvidia was able to work around it earlier with a brilliant solution because their GPU scheduler is a software one, so they could modify it. AMD has had a hardware scheduler for ages and it took them much longer to find a solution for DX11's shortcomings. DX12 was the opposite, and Nvidia still has much higher CPU overhead, which they can't fix without going to a hardware GPU scheduler - but then they would lose the DX11 advantage, hence this won't change anytime soon.


That's likely down to the DX11 issues mentioned earlier. AMD physically can't make it as good as Nvidia's workaround because of how their hardware works. That said, neither vendor is at fault here; DX11 just has core issues (literally: single-thread limits).
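To make that single-thread point concrete, here's a minimal C++ sketch of the idea (purely illustrative - `CommandList`, `Draw` and `record` are hypothetical stand-ins, not real Direct3D types): in the DX11 model every draw call funnels through one immediate context on one thread, while the DX12 model lets several threads record their own command lists in parallel, with only the final submit serialised. Nvidia's DX11 driver effectively did some of that threading behind the application's back, which AMD's hardware scheduler couldn't replicate.

```cpp
// Illustrative only: CommandList, Draw and record() are hypothetical stand-ins,
// not real Direct3D types. The point is the threading model, not the API.
#include <cstdio>
#include <thread>
#include <vector>

struct Draw { int mesh_id; };                      // pretend draw call
struct CommandList { std::vector<Draw> cmds; };    // pretend recorded GPU work

// Stands in for the per-draw CPU cost (state validation, driver work, etc.).
static void record(CommandList& cl, int first, int count) {
    for (int i = 0; i < count; ++i)
        cl.cmds.push_back(Draw{first + i});
}

int main() {
    const int kDraws = 100000;

    // "DX11 style": one immediate context, so every draw is recorded on one
    // thread and that single core becomes the bottleneck.
    CommandList immediate;
    record(immediate, 0, kDraws);

    // "DX12 style": each worker records its own command list in parallel;
    // only the final submission is serialised on the main thread.
    const int kThreads = 4;
    std::vector<CommandList> lists(kThreads);
    std::vector<std::thread> workers;
    for (int t = 0; t < kThreads; ++t)
        workers.emplace_back(record, std::ref(lists[t]),
                             t * (kDraws / kThreads), kDraws / kThreads);
    for (auto& w : workers) w.join();

    std::size_t total = 0;
    for (const auto& cl : lists) total += cl.cmds.size();  // ~ExecuteCommandLists
    std::printf("immediate: %zu draws, threaded: %zu draws\n",
                immediate.cmds.size(), total);
    return 0;
}
```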

Yup, I even remember this. AMD launched Mantle, and whatever their reason for doing it, it showed clearly the problem with DX11. A slight digression, but tech journos like Ryan Shrout acted almost like they had no clue what it was meant to be, and with that used very fast CPUs with slow GPUs in single-player games to conclude "it does absolutely nothing at all".
It doubled, no joke... my frame rates in BF4 multiplayer.

What a cynical bunch of _____ holes.

In DX12, because of the way AMD's architecture works, they are anything up to 35% faster than Nvidia in a CPU-bottlenecked scenario.

If they pushed for RT now, they would save on development costs but lose on sales - people will quickly ditch publishers that release games which just don't work well on their machines. And that would be it. Mainstream gamers don't like to spend much, and they dictate what is widely used.

RT is one of those things where, if you push it so hard you only get 35 FPS, Nvidia are very much better than AMD; at 60 FPS or higher, that Nvidia vs AMD advantage is very much diminished.

It's pure marketing, and too many tech journos play along with it.
 
I should have paid more attention when the 4090 came out, but I wasn't looking for a card then. I'm going to wait until the 5090 is out, but generally, on any xx90 series, what's usually the better version: the "base" card, the FE version or the Ti version, and what order do they tend to come out in, please? I'm assuming the base version is probably the least capable of the three (not that it's not capable!), I'm just unsure where the other two sit, and which one I should try to bag as soon as possible when it's released (which gives me months to save!). It will be used for games mostly, and will need to run decently for probably several years, given I can't afford to upgrade again for several years.
 
Usually there is only the vanilla xx90 at launch (the equivalent of your 980 Ti), then at the mid-cycle refresh there is a 90 Ti (but they have only done it once; it did not happen with the 2080 Ti and the 4090) - but I wouldn't bother with a 90 Ti; the vanilla 90 was enough until next gen.
 
What does Nvidia feel about all this?
Doubt they'll care.....

"There are some known limitations though, like currently only targeting the ROCm 5.x API and not the newly-released ROCm 6.x releases. In turn, having to stick to the ROCm 5.7 series as the latest means that the ROCm DKMS modules don't build against the Linux 6.5 kernel now shipped by Ubuntu 22.04 LTS HWE stacks, for example. Hopefully there will be enough community support to see ZLUDA ported to ROCm 6 so at least it can be maintained with current software releases." (See the Phoronix article source linked above.)

And so once again there needs to be some attempt by AMD to support the current HWE kernel as well as the LTS one, because that HWE kernel is needed by AMD hardware that's sometimes 3+ years old, as it takes a while to get AMD's hardware fully supported in the kernel, as we all know! So there are still the issues that Phoronix had with ROCm/HIP, as in their earlier articles on ROCm/HIP and iGPU and dGPU compute API support for Radeon GPUs!

Also:

"ZLUDA on AMD GPUs still share some of the same inherent issues of ROCm in the officially supported hardware spectrum not being as broad as NVIDIA with their all-out CUDA support. Or the matter of ROCm largely catering to the major enterprise Linux distributions and aside from that the ROCm software support is basically limited to community efforts elsewhere. So the same headaches of installing/using ROCm are involved with the ZLUDA route as opposed to a relatively care-free experience on any Linux distribution and any (recent) GPU if using the NVIDIA proprietary driver
 

Looks like AMD have performance issues in this one.

[Benchmark screenshots: a7OtRLk.png, bhc1f3j.png]


DLSS has ghosting, but also visible benefits


However, DLSS is not perfect: as has been seen more often recently, Nvidia's technology struggles with massive ghosting on some smaller image objects. This primarily affects flying objects such as leaves, which leave visible streaks across the picture. It doesn't happen all the time and isn't particularly distracting, but it shouldn't happen at all.



FSR works better than in many other games


FSR has no problems with ghosting, but there are the usual difficulties in terms of image stability and disocclusion flickering. However, Banishers: Ghosts of New Eden has quite a good implementation of the upsampling technology; compared to many other games, FSR performs pretty well. Especially in Ultra HD, even with FSR "Performance" you still get a decent image, clearly superior to native rendering at the same number of pixels.

Banishers: Ghosts of New Eden supports Nvidia DLSS Super Resolution and Frame Generation version 3.1.13.0 and AMD FSR Super Resolution version 2.2. The new FSR 3 frame generation is not offered, and neither is the usually good TSR of the Unreal Engine or Intel's XeSS. Super resolution is the focus of this page; the editorial team will not go into frame generation at this point.


AMD FSR and Nvidia DLSS both have their pros and cons in Banishers: Ghosts of New Eden. As usual, DLSS is the better technology and manages to calm the image significantly better than FSR, especially with aggressive settings and at low resolutions. Apart from image stability, the reconstruction also works better with DLSS.

Especially in Ultra HD, upsampling should be used


Compared to native resolution, FSR in Ultra HD delivers comparable image quality with various advantages and disadvantages, while DLSS produces slightly better quality on the monitor – at least as long as the sometimes massive ghosting doesn't get in the way. In addition, the game's own TAA struggles with graphical errors that neither DLSS nor FSR have. If the rendering resolution has to be reduced due to performance problems, upsampling with either DLSS or FSR is a clear advantage over a simple reduction in resolution.

Sounds like it might be worth swapping the DLSS preset from D to C.
 
Looks like good performance on Nvidia. It's using UE5 if I remember correctly, and most of the earlier titles using it were struggling. The game is a pleasant surprise too.

Yeah, either AMD are lacking, and/or Nvidia have figured out how to get more from UE5, and/or game devs are optimising better for Nvidia, as this has been the case for a few of the latest UE5 games now, e.g.


Devs also appear to have figured out how to optimise for VRAM with UE5 now as well!
 
Good to see. Nvidia has some tricks up its sleeve compared to AMD, SER (Shader Execution Reordering) for example; maybe this is what pulls them ahead.
 