The RT Related Games, Benchmarks, Software, Etc Thread.

The 4070 is around 50% of the performance of a 4090; last gen you got 58% of a 3090 for £369, so with the 4070 you're paying 45% more for 8% less performance. Even taking into account UK inflation, which has been higher than in most places, a 4070 should cost no more than £436.
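For reference, here is a back-of-the-envelope version of that calculation, plugging in the figures quoted above; the performance shares and the £436 ceiling are the poster's own, so treat this as a sketch of the argument rather than benchmark data.

    // Rough sketch of the value arithmetic in the post above (illustrative only;
    // the performance shares and the GBP 436 ceiling are the poster's figures).
    #include <cstdio>

    int main() {
        // Last gen (poster's figure): ~58% of a 3090's performance for GBP 369.
        // This gen (poster's figure): a 4070 delivers ~50% of a 4090's performance.
        const double lastShare = 0.58, lastPrice = 369.0;
        const double thisShare = 0.50;

        // Price that would keep the same "share of flagship performance per pound".
        const double parityPrice = lastPrice * (thisShare / lastShare);
        std::printf("Value-parity price: ~GBP %.0f\n", parityPrice);   // ~318

        // The poster's GBP 436 ceiling then implies roughly this inflation allowance:
        std::printf("Implied inflation allowance: ~%.0f%%\n",
                    (436.0 / parityPrice - 1.0) * 100.0);              // ~37%
        return 0;
    }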
Sorry dude, none of this percentage-of-previous-gen stuff means jack to me; I couldn't care less. I didn't have the previous generation.

I went from an RX 580 8GB... So my point still stands about what this build is capable of, how cheap it came in versus the budget I allowed, and how some of the hardware was actually cheaper than the quad-core build I put together around 2008... It's certainly very good value compared to buying my second mITX rig brand new, given the inflated prices of mITX components!

FWIW, my mechanical 3.5" HDD back around 2008 would have cost me more than the PCIe 4.0 NVMe drive I have now, as would the case I had and the PSU; the RAM and CPU were, IIRC, around the same or a bit more, and the motherboard cost more back then...

So I think I did alright this time, coming in £384 under budget and buying myself a new mouse and a 32" 1440p 165 Hz monitor :) I could have gone AM5, but it didn't suit my needs, and neither did an X3D this time; I just wanted a cool-running, silent, air-cooled build, and I achieved that. And it plays anything I throw at it at max settings.

Not too shabby for £384 under budget, with around 80% of my components cheaper than in the last brand-new build I did around 2008 with a first-stepping Q6600 quad core... (the second mITX build I have was bought second-hand, as I say).

FWIW though, this is just my own experience, no trolling or offence intended. It's just a review of what I built for my own needs, which I personally think I met, especially given the bonus monitor and mouse. Apart from choosing to keep my favoured old keyboard, I think this is definitely a 'full system' at this point. I only use an ancient 2010 aluminium iMac wired full-size keyboard: I just like its laptop-style keys and full-size numpad, the keys match my two MacBook Pros, and it's built like a tank, so there's no reason to change. I like a wireless mouse, but for a keyboard I don't see the need to go wireless or to upgrade.
 

PSA: Alan Wake II Runs on Older GPUs, Mesh Shaders not Required


by btarunr

"Alan Wake II," released earlier this week, is the latest third person action adventure loaded with psychological thriller elements that call back to some of the best works of Remedy Entertainment, including "Control," "Max Payne 2," and "Alan Wake." It's also a visual feast as our performance review of the game should show you, leveraging the full spectrum of the DirectX 12 Ultimate feature-set. In the run up to the release, when Remedy put out the system requirements lists for "Alan Wake II" with clear segregation for experiences with ray tracing and without; what wasn't clear was just how much the game depended on hardware support for mesh shaders, which is why its bare minimum list called for at least an NVIDIA RTX 2060 "Turing," or at least an AMD RX 6600 XT RDNA2, both of which are DirectX 12 Ultimate GPUs with hardware mesh shaders support.

There was some confusion on gaming forums over the requirement for hardware mesh shaders. Many people assumed that the game would not work on GPUs without mesh shader support, locking out lots of gamers. Through the course of testing for our performance review, we learned that while it is true that "Alan Wake II" relies on hardware support for mesh shaders, the lack of it does not break gameplay. You will, however, pay a heavy performance penalty on GPUs without hardware mesh shader support. On such GPUs, the game shows a warning dialog that the GPU lacks mesh shader support (screenshot below), but you can choose to ignore the warning and go ahead and play. The game treats mesh shaders as a "recommended GPU feature," not a requirement. Without mesh shaders, you can expect a severe performance loss, best illustrated with the AMD Radeon RX 5700 XT, based on the original RDNA architecture, which lacks hardware mesh shaders.
[Screenshot: the game's warning dialog on GPUs without hardware mesh shader support]

[Chart: benchmark results from our performance testing]

In our testing at 1080p, without upscaling, the RX 5700 XT performs worse than the GeForce GTX 1660 Ti. In most other raster-only titles, the RX 5700 XT with the latest AMD drivers is known to perform about as fast as an RTX 2080; here it lags behind the GTX 1660 Ti. It's important to note that the GTX 16-series "Turing," while lacking the RT cores and tensor cores of its RTX 20-series cousins, does feature hardware support for mesh shaders, and is hence able to perform along expected lines. We have included a projection of how the RX 5700 XT typically fares in our testing: it ends up roughly in the performance region of the RTX 3060 and RX 6600 XT. AMD's Radeon RX 6000 series (RDNA2) and current RX 7000 series (RDNA3) fully support hardware mesh shaders across all GPU models.
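For context, hardware mesh shader support is something a renderer can query up front. Below is a minimal Direct3D 12 sketch of such a capability check; it is a generic example, not Remedy's actual code, and the function name is ours.

    #include <windows.h>
    #include <d3d12.h>

    // Returns true if the adapter behind `device` exposes hardware mesh shaders
    // (D3D12 mesh shader tier 1 or higher), false otherwise.
    bool SupportsMeshShaders(ID3D12Device* device)
    {
        D3D12_FEATURE_DATA_D3D12_OPTIONS7 options7 = {};
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS7,
                                               &options7, sizeof(options7))))
        {
            // A runtime/driver too old to know about OPTIONS7 cannot do mesh shaders.
            return false;
        }
        return options7.MeshShaderTier >= D3D12_MESH_SHADER_TIER_1;
    }

Turing and RDNA2-or-newer GPUs report tier 1 here, while Pascal and RDNA1 parts report "not supported"; a title can then warn and fall back to a slower geometry path instead of refusing to launch, which matches the behaviour described above.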

That doesn't mean the RX 5700 XT delivers unplayable results. 1080p at 60 FPS is within reach at the lowest settings, or at close to maximum settings with FSR Quality, which is not such a terrible tradeoff; you just still have to make compromises. We didn't spot any rendering errors or crashes.

Once we knew the RX 5700 XT works, we also wanted to test the NVIDIA side of things. Using the GeForce GTX 1080 Ti "Pascal", the flagship GPU of that generation, we were greeted with the same warning dialog as on the RX 5700 XT: the GPU is missing support for mesh shaders. Not only does the GTX 1080 Ti vastly underperform, it delivers far worse performance than the RX 5700 XT, lower by nearly two-thirds. At launch, the RX 5700 XT was a little slower than the GTX 1080 Ti in our reviews of the time, but it has climbed since and is now a tiny bit faster. Since the card lacks DLSS support, FSR is the only option, but even that can't save it: running at 1080p lowest with FSR 2 Ultra Performance yielded only 27 FPS.
 
Oh, the "being the best" excuse. Sure, that works, too, amongst others.
I think you're missing the point: the 4080's performance gap to the 4090 has doubled compared to the 3080 and 3090, yet the price gap has halved, so you're paying a lot more and getting significantly less.
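As a rough illustration of that claim, using US launch MSRPs and approximate 4K performance deltas from launch-era reviews (the exact percentages vary by review, so these are ballpark, illustrative figures):

    // Ballpark numbers behind the "performance gap doubled, price gap halved" claim.
    // MSRPs are launch prices in USD; the performance deltas are approximate.
    #include <cstdio>

    int main() {
        const double p3080 = 699, p3090 = 1499;
        const double p4080 = 1199, p4090 = 1599;
        const double gap30 = 0.10;   // 3090 roughly ~10% ahead of the 3080 at 4K
        const double gap40 = 0.27;   // 4090 roughly ~25-30% ahead of the 4080 at 4K

        std::printf("Price gap:       $%.0f -> $%.0f (about half)\n",
                    p3090 - p3080, p4090 - p4080);                  // $800 -> $400
        std::printf("Performance gap: %.0f%% -> %.0f%% (roughly doubled)\n",
                    gap30 * 100, gap40 * 100);
        return 0;
    }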
 
Keep in mind that the 4070 is also the best case value-wise, but even that, if you look at it, is not really faster than a 6800 XT in raster, retails at the same price (three years later), and has 4 GB less VRAM. You do get the Nvidia niceties over it (RT performance, DLSS, et al.). Even compared to the 3080 it's really just +2 GB VRAM and Frame Generation, three years later. And that's THE BEST value proposition Nvidia has put forth (excluding the 4090). If we look at any other card, it's nothing but a bloodbath. And by the way, this is without taking into account the nice returns from mining while you sleep with the previous gen, which you may argue is an oddity, but it was nonetheless a real boon for anyone with a GPU (so the real value of that generation was even higher; we can argue whether it's "fair" to count that as added value or not).

So, don't misunderstand me: the cards can still put in a lot of work, and you can enjoy using one with zero issues. But if we compare generational progression and pricing, it's clear Nvidia chose to keep profit margins super high and just sell on brand value, and because they have no real competition, why wouldn't they? If you look at the 4090, it all makes sense: that one's a behemoth and demolishes the 3090, but everything else is pretty much running in place. There's just no way your flagship card has insane price/perf unless you're also selling gimped cards below it; there are diminishing returns on performance as you scale up.
I kind of get what you're saying, but none of that is relevant to me, as I went from a second-hand RX 580 8GB mITX rig I was casually using as a dual-boot hackintosh on my TV for emulation and older titles...
So, no offence intended, but what last gen or the gen before did means nothing to me, as I don't upgrade every generation. I had another £400 in my budget and could have gone AM5, but I didn't see the point, same as I didn't choose a 5800X3D: I wanted a cool, silent rig that sipped power, which, paired with my undervolt, I've achieved, and without watercooling. So for my needs it suits me well.

Regarding the whole three-year-old RX 6800 XT vs 4070 thing though: for someone like me who doesn't have a 5700 XT/6600/6600 XT/6700/6700 XT/6800 to upgrade from, a 6800 XT would need to be, as you say, worth the jump. So a friend and I ran the following experiment, as we built new rigs at the same time thanks to a sale from the same supplier... So check this:
We both bought a 5700X, 32 GB of Corsair LPX and an 850 W PSU; he bought a 6800 XT, I bought my 4070...
In EVERY game we ran, both at the same settings and resolution (1440p), mine used between 2.3 and 3.6 GB LESS actual VRAM than his...

Whether that's down to some sneaky compression, better optimisation for Nvidia, or code that favours Nvidia hardware, I've no idea, but I don't care, as the reality is simple: if I use around 3-4 GB less than him (actual usage, not allocation), then the remaining VRAM available to be allocated IF required ends up about the same on both cards. TL;DR: usage never climbed high enough to leave less than 3.3-4 GB of VRAM free on either card, so it's a win-win regardless of the total VRAM you start with :) See my point?
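A quick back-of-the-envelope version of that headroom argument (the 2.3-3.6 GB delta is the poster's observation; the absolute usage figure is a hypothetical reading chosen purely for illustration):

    // Sketch of the VRAM headroom comparison above, with a hypothetical usage figure.
    #include <cstdio>

    int main() {
        const double cap4070 = 12.0, cap6800xt = 16.0;   // VRAM capacities, GB
        const double used4070   = 8.0;                   // hypothetical in-game reading
        const double used6800xt = used4070 + 3.5;        // ~2.3-3.6 GB more on the 6800 XT

        std::printf("4070 headroom:    %.1f GB\n", cap4070   - used4070);     // 4.0 GB
        std::printf("6800 XT headroom: %.1f GB\n", cap6800xt - used6800xt);   // 4.5 GB
        return 0;
    }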

So both cards would 'run out' at the same point in the future as each other... The only difference is that I can switch on DLSS 3.5 (or whatever future version) and Frame Generation, and he can't...
As others have shown, you gain around 50% extra fps with FG turned on, so you can't lose, and you also use less VRAM with DLSS 3.5 on. Even @mrk has shown in earlier posts that a 4090 maxing out Alan Wake 2 at 4K is only using around 11 GB...

No game I've found has used more than 9.6 GB, and that's with RT on at maxed graphics settings at 1440p... Most stuff uses WAY less!

As I say, it's probably down to heavy Nvidia influence/bias/developer sponsorship, but it works for me if it means the VRAM I have physically remaining matches what's left on a hungrier 6800 XT... It means I'm not missing out on the extra 3-4 GB my mate's card is basically using up to take up the slack of hardware that's less optimised for said games.

I did heavily consider a 6800 XT, but it's pretty much three years old now, and the extremely low power draw of the 4070, especially when undervolted (I use 115-135 W, or 145 W max with RT on), along with the modern feature set of the 40-series, for what was £80-100 more than a 6800 XT at the time, was a no-brainer for me. I'd seen what DLSS 3 onwards could do together with Frame Generation, something my friend will never get, so he probably won't have the future-proofing fallback I'll have when settings/resolution start to chug... Also, in my experience, with DLSS on I use even less VRAM, so win-win. Having tried FSR on both my 580 8GB and my 4070, it's not even a question which is better, let alone when paired with FG...

FWIW though, this is just my own experience, no trolling or offence intended. It just suited my needs well, seeing as all I had before was a second-hand mITX build (3500X, RX 580 8GB, 16 GB 2666 MHz, 500 GB PCIe 3.0 NVMe, 450 W mITX PSU), so the jump was BIG!
 



There is no indication that VRAM consumption is too high

ComputerBase has not conducted extensive research into VRAM consumption. However, there is no indication that it is particularly high. Without ray tracing, a 10 GB graphics card seems to be on the safe side; with ray tracing, that applies from 16 GB at the latest. One level lower still seems to run well: 8 GB without ray tracing and 12 GB with ray tracing run absolutely problem-free, at least for a short playing time.
 
"...at least for a short playing time." They might have been honest though, ahem.
We've already seen @mrk with a 4090 show it maxed out, what it uses VRAM/resource-wise and what it puts out performance-wise, so I couldn't care less about random journos' opinions.
People on here who've actually bought it will tell you the truth more readily, as they'll happily slate it if it's a mess, trust me! :D :cry:
 