
Status
Not open for further replies.
I just drove around for about 20 mins in game; system RAM usage topped out at 12.023GB at 4K, ultra textures, RT medium.

VRAM usage was 7947MB, so it looks like it was probably running at full utilization.

Weirdly, the stuttering issue didn't occur this time, gonna do some more testing.
Maybe the game is smart and lowers texture quality in places where it doesn't matter much?

Godfall's approach seems horrible. I would take Horizon Zero Dawn's approach instead...
 
Ok, got a slowdown to around 30 FPS for a few seconds, but the game didn't freeze this time.

I don't think it's affected by desktop resolution or GPU overclock. I'll try with the GPU OC disabled.

Another change I made before I started playing today was setting the power profile to maximum in the Nvidia Control Panel...
 
Last edited:
Yup, 20-30 second freeze definitely still happening. It happened when I was about halfway across one of the Thames bridges going towards Big Ben + Parliament.

All GPU overclocking disabled.

The game reports up to 8.39GB estimated VRAM usage.
 
The problem with full VRAM is that it's usually really hard to bring usage down when the game has a "high" baseline VRAM usage.

Sometimes settings are meaningless and barely affect VRAM usage, sadly.
 
Video RAM effectively does have a swap system; open up Task Manager and go to the GPU tab.

On my system this is what it displays:

Dedicated GPU memory: 10.0GB
Shared GPU memory: 16.0GB (basically the swap space)
GPU memory: 26.0GB (combined capacity)

I have seen this utilised in FF15. Bear in mind Windows can page physical RAM allocations out to a pagefile, which can cause further slowdown if you're in the region of using the shared memory portion. It's not just iGPUs that can use system RAM.
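The Task Manager figures above suggest a simple overflow model: allocations fill dedicated VRAM first and spill into shared system memory. Here's a minimal sketch of that idea (an illustration only, not the real WDDM allocation policy):

```python
# Hypothetical sketch of how allocations spill from dedicated VRAM into
# shared system memory, using the capacities reported in Task Manager.
# Illustration only - not how the WDDM driver actually places memory.

DEDICATED_GB = 10.0   # dedicated GPU memory
SHARED_GB = 16.0      # shared GPU memory (system RAM acting as overflow)

def place_allocation(request_gb, dedicated_used=0.0, shared_used=0.0):
    """Fill dedicated VRAM first; any overflow goes to shared memory."""
    into_dedicated = min(request_gb, DEDICATED_GB - dedicated_used)
    into_shared = request_gb - into_dedicated
    if shared_used + into_shared > SHARED_GB:
        raise MemoryError("combined GPU memory exhausted")
    return dedicated_used + into_dedicated, shared_used + into_shared

# An 8GB game fits entirely in dedicated memory...
print(place_allocation(8.0))
# ...but a 12GB workload spills 2GB into shared memory, which is
# when the slowdowns described in this thread tend to kick in.
print(place_allocation(12.0))
```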
 
For the avoidance of doubt, I have 16GB of RAM installed, so running the game without a pagefile enabled really shouldn't crash it, even if VRAM capacity is exceeded.

Or, maybe it's a feature ;)

16GB isn't a lot these days, especially with no pagefile.

I have the game in my Steam library, so I could test it with 32GB of RAM and no pagefile.

It's a hard place, as Windows has really awful virtual memory management.

In BSD the pagefile is only used to avoid OOM, which is how it should be in my opinion.
Linux is nearer to Windows' behaviour but "is" tunable to make it behave like BSD.
Windows will swap stuff out when you're not even using much more than half of your RAM; it heavily favours keeping the cache big over not swapping out data, and this is why many people disable their pagefile, as it's really dumb behaviour. You cannot tune the behaviour in Windows; the only control you have is the size of the pagefile and turning it on/off.
Also, Windows 10 has a compressed pagefile feature, which is known to cause issues if a system is repeatedly paging things in and out. It can be toggled.
There is no clear-win decision really, as "some" software is designed to fail if the pagefile is not present. The only ultimate solution is maybe massively over-provisioning RAM so you never end up using more than maybe 25%, which would prevent most swapping.
 
Last edited:
For the avoidance of doubt, I have 16GB of RAM installed, so running the game without a pagefile enabled really shouldn't crash it, even if VRAM capacity is exceeded.

Or, maybe it's a feature ;)

No one has really answered why the game crashes with the pagefile off. Shouldn't the game use RAM and VRAM entirely, rather than the pagefile, to avoid this problem (unless RAM is fully utilized)?

It would need a deep dive really to know what the game is doing. Quite a lot of games seem to allocate a small amount of space on the pagefile for whatever reason (e.g. GTA5 utilises around 300MB IIRC and doesn't really like it if the pagefile isn't configured) but in many cases don't actually seem to touch the pagefile during operation. (I'm not sure if this is the game doing something directly/indirectly that causes this or whether it's just how the OS works with regard to commit.)

Also, while it is best for most people just to leave it on system-managed size, Windows doesn't really use the pagefile very well, and most of the tweak guides for it are wrong. If you do manually configure it, then you want to set the minimum size to at least 1024MB (higher doesn't necessarily help much depending on your setup, but specific setups might benefit from a higher minimum; below 1024MB can conflict with how Windows works in some situations). The maximum depends on your utilisation: generally most games will run fine with a max of 8192MB, but people running lots of RAM and/or lots of applications that use a lot of memory may need to set it higher. On a mechanical HDD, especially with low system RAM, you might find setting it statically to 1.5x your RAM amount a benefit (or higher if your application workload requires).
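As a rough illustration, the manual-sizing advice above can be condensed into a little calculator. This encodes this post's heuristics only, not any official Microsoft recommendation:

```python
# Rule-of-thumb pagefile sizing from the advice above - a sketch of this
# post's heuristics, not an official Microsoft recommendation.

def pagefile_size_mb(ram_gb, mechanical_hdd=False, heavy_workload=False):
    """Return (min_mb, max_mb) for a manually configured pagefile."""
    minimum = 1024  # below 1024MB can conflict with how Windows works
    if mechanical_hdd:
        # On a slow HDD (especially with low RAM) a static 1.5x RAM
        # pagefile can be a benefit.
        static = int(ram_gb * 1024 * 1.5)
        return static, static
    maximum = 8192  # enough for most games
    if heavy_workload:
        # Lots of RAM and/or memory-hungry apps may need a higher max.
        maximum = max(maximum, int(ram_gb * 1024))
    return minimum, maximum

print(pagefile_size_mb(16))                        # typical gaming setup
print(pagefile_size_mb(8, mechanical_hdd=True))    # low-RAM HDD system
print(pagefile_size_mb(32, heavy_workload=True))   # heavy multitasking
```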
 
The Houses of Parliament + Big Ben look much much better with Ultra Geometry set. At 4K, a lot more details are noticeable at a distance with the sharpness at 100%.

There's definitely an argument for 100% sharpness to be default on 4K...

DLSS Quality mode seems to get rid of any horrible artefacts caused by ramping up the sharpness :)

If you set Geometry to very high, Big Ben's clock face is missing lol. What a disgrace :p
 
Last edited:
The game seems to be working nicely now, with these settings:
  • 4K + DLSS Balanced (best visual quality vs lower resolutions + DLSS settings)
  • Ultra texture resolution
  • Ray tracing - medium
  • Vsync off (for now)
  • In game Frame rate limit of 60 FPS
I haven't had a single freeze / FPS drop in about half an hour of driving around the City on a bike. The reported in game VRAM budget remains the same as before, at 8.39GB...

Either something small has changed with my game config, or the frame rate limit is somehow preventing the gameplay freezes... I noticed it helps to reduce the GPU utilization a bit (below constant 100% with RT medium enabled).

Tip - I found out that DLSS balanced generally looks better (at least to my eye), because the antialiasing on DLSS quality doesn't work on some objects like fences (at a distance). I think DLSS balanced is just more optimised at the moment.

Time to enjoy the game finally... I can eke out a little more performance later if my motherboard gets a Resizable BAR update.
 
Last edited:
The game seems to be working nicely now, with these settings:
  • 4K + DLSS Balanced (best visual quality vs lower resolutions + DLSS settings)
  • Ultra texture resolution
  • Ray tracing - medium
  • Vsync off (for now)
  • In game Frame rate limit of 60 FPS
I haven't had a single freeze / FPS drop in about half an hour of driving around the City on a bike. The reported in game VRAM budget remains the same as before, at 8.39GB...

Either something small has changed with my game config, or the frame rate limit is somehow preventing the gameplay freezes... I noticed it helps to reduce the GPU utilization a bit (below constant 100% with RT medium enabled).

Tip - I found out that DLSS balanced generally looks better (at least to my eye), because the antialiasing on DLSS quality doesn't work on some objects like fences (at a distance). I think DLSS balanced is just more optimised at the moment.

Time to enjoy the game finally... I can eke out a little more performance later if my motherboard gets a Resizable BAR update.
I hope you can check out Godfall at 4K sometime.

The more I think about it, the more I liken the card to the GTX 770.

I was drawing conclusions that it would end up like how the 4GB GTX 980/970 did, but my thoughts started to change for the worse.
 
The Houses of Parliament + Big Ben look much much better with Ultra Geometry set. At 4K, a lot more details are noticeable at a distance with the sharpness at 100%.

There's definitely an argument for 100% sharpness to be default on 4K...

DLSS Quality mode seems to get rid of any horrible artefacts caused by ramping up the sharpness :)

If you set Geometry to very high, Big Ben's clock face is missing lol. What a disgrace :p
Yeah, I use 50% sharpen for Cyberpunk at 1080p DLSS Quality, and image quality becomes superb, without any big compromises.

DLSS and sharpening really play along so well. Watch Dogs Legion actually comes with a pre-defined sharpener profile in the Nvidia Control Panel, did you know? :) But you can always bump it up more of course.

I agree with 100% details btw. I tried the game in the demo and the city becomes really more refined and detailed at 100%. My CPU managed to stay around 40 FPS and I enjoyed cruising around London.

https://imgsli.com/NDY2NTE
 
I take back what I said; it looks like 8GB of VRAM isn't quite enough (by around 400MB for WD: Legion).

While playing Watch Dogs: Legion, I noticed that exceeding the VRAM budget by playing at 4K (any DLSS setting), Ultra textures (optional texture pack installed) + Ray tracing on medium, causes prolonged freezing in game.

It's very weird when it happens, your character just freezes and nothing happens for ~30 seconds, but the game doesn't crash. Afterwards, gameplay resumes as normal.

I don't mind too much, as I'm sure turning down the texture setting to High will fix the over budget VRAM issue.

There's a TechRadar article here that discusses the problem:
https://www.techradar.com/uk/news/n...6gb-vram-runs-watch-dogs-legion-more-smoothly

As a workaround for now, players can reduce the texture resolution setting from Ultra to any other setting, as the difference between low and Ultra is about 4.2GB of VRAM.

Overall though, framerates seem much better on an 8-core CPU and new RAM; I can get 50-60 FPS at 4K DLSS Balanced, and a solid 60 on DLSS Performance.

Guess I'll be upgrading the GPU in 2-3 years if this happens in more than a few games.

I have a 3070 8GB card. I think it's enough for 1440p, no RTX, for 2-3 years.
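The over-budget situation described above can be sketched with a toy calculation. The 4.2GB low-to-Ultra texture delta and the 8GB card come from this thread; the baseline figure and the "high" delta below are assumptions purely for illustration:

```python
# Toy VRAM budget check using figures from this thread. The 4.2GB
# low-to-Ultra texture delta and the 8GB card are from the posts above;
# the baseline and the "high" delta are assumptions for illustration.

CARD_VRAM_MB = 8192
TEXTURE_COST_MB = {"low": 0, "high": 3000, "ultra": 4200}  # "high" assumed

def estimated_usage_mb(base_mb, texture_setting):
    """Baseline usage plus the cost of the chosen texture setting."""
    return base_mb + TEXTURE_COST_MB[texture_setting]

BASE_MB = 4390  # hypothetical baseline so Ultra lands ~400MB over budget

for setting in ("low", "high", "ultra"):
    usage = estimated_usage_mb(BASE_MB, setting)
    over = usage - CARD_VRAM_MB
    status = f"{over}MB over budget" if over > 0 else "within budget"
    print(f"{setting}: {usage}MB ({status})")
```

Dropping from Ultra to any lower texture setting brings usage back under the card's budget, which matches the workaround suggested above.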
 
Seems mad that anyone is disabling the pagefile to increase performance; even with 32GB of RAM, for example, FS2020 can crash without any pagefile.
 
Seems mad that anyone is disabling the pagefile to increase performance; even with 32GB of RAM, for example, FS2020 can crash without any pagefile.

It's a tough decision. My original decision to go to 32GB of RAM came when FF15 was crashing the entire OS (and that was with a pagefile); the game has been known to use as much as 60GB of memory on a 128GB system.
I then originally kept the pagefile enabled, as it increases virtual memory capacity, and on that basis it is a sound decision for system stability.
But then I discovered I had stuttering in various games due to I/O. After lengthy diagnosis I found that even though I wasn't close to using most of my RAM (excluding standby cache, which is supposed to be treated as free RAM, but Windows hesitates here, hence me saying it's bad at memory management), it was swapping out game assets to disk with 32GB!!! of RAM. As soon as I disabled swap, it fixed the issues.

Currently I do have a pagefile again, but it's at the smallest size I can get away with. It can be as low as 256MB (if you set it to 16 it bumps up to 256), but I found a 1GB pagefile is better for stability. Those with 16GB of RAM or less should not do this (I have 12GB of commit right now on my desktop); that's not enough RAM to shrink the pagefile that far. I also have virtual memory utilisation on my RTSS OSD so I can keep an eye on it. HWiNFO can monitor it, and from there you can add it to the OSD.

For those who are interested, do this.

Open Task Manager and click the Memory tab; in there you'll see a commit level (this is actual usage). You will also see a cached amount next to it, which is typically both read and write cache combined.
Now click Open Resource Monitor at the bottom of the Task Manager window. From there, click on the Memory tab.
You should now see a long bar in the bottom box; this is great information.
There is: In use, Modified, Standby and Free.
In use and Modified cannot be reallocated. Modified is basically data in cache not yet written to disk, so it needs to be written out to be freed; In use is committed RAM for applications to use.
However, Standby should be available to the OS to allocate; it's basically a read cache of data available on disk. The controversy here is that when Windows runs out of "free" memory, it prefers to use the pagefile rather than allocate to apps from the standby allocation. This is why it can start to use the pagefile when RAM commit is low.
Note as well that the "in use" figure may be lower than the commit level: an app asks for XX amount of memory, and Windows allocates it, which commits the memory; however, the app may not use all of it, so the "in use" can be lower. Windows, as I understand it, unlike Linux, cannot overcommit, meaning it's much easier to get OOM issues.
When commit reaches virtual memory capacity, it's game over.
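A rough model of that Resource Monitor bar, with made-up numbers (the real values come from the Windows memory manager; this just illustrates why "available" and "free" differ):

```python
# Simplified model of the Resource Monitor memory bar described above.
# The numbers below are made up for illustration; the real categories
# come from the Windows memory manager.

def memory_bar(total_gb, in_use, modified, standby):
    """Split total RAM into the four Resource Monitor categories."""
    free = total_gb - in_use - modified - standby
    # Standby is read cache that should be reclaimable on demand,
    # but Windows often prefers hitting the pagefile over trimming it.
    available = free + standby
    return {"in_use": in_use, "modified": modified,
            "standby": standby, "free": free, "available": available}

bar = memory_bar(total_gb=32, in_use=12, modified=1, standby=15)
print(bar)
# Even though free + standby is nominally "available", only the free
# portion is truly empty - the point at which Windows may start paging
# despite commit still being well below capacity.
```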
 
Last edited:
No surprise, and yet we had lots of threads which got locked amid the denial.

Usually the old advice, "buy Nvidia cards, they get a better used price when you want to move on", is true enough.

However, what I have found looking at popular auction sites and high-street vendors of used kit is that, in terms of raw compute, this is not the case:
The 3060 Ti sells for less than the 6700 XT
The 3070 sells for less than the 6800
The 3080 sells for less than the 6800 XT
About the only one where this isn't the case is the 3090, which still sells for more than the 6900 XT.

And as it happens... the 3090 is the only Nvidia card which has more VRAM at a given tier than AMD's equivalent.

I know supply and demand has a bearing, and for most of the last few years' boom, AMD cards were harder to get than Nvidia ones.
However, I think VRAM has a lot to do with this too, and I think - despite the denial on forums - that buyers are reluctant to buy YAEGC (Yet-Another-Eight-Gig-Card).
 
Last edited:
No surprise, and yet we had lots of threads which got locked amid the denial.

Usually the old advice, "buy Nvidia cards, they get a better used price when you want to move on", is true enough.

However, what I have found looking at popular auction sites and high-street vendors of used kit is that, in terms of raw compute, this is not the case:
The 3060 Ti sells for less than the 6700 XT
The 3070 sells for less than the 6800
The 3080 sells for less than the 6800 XT
About the only one where this isn't the case is the 3090, which still sells for more than the 6900 XT.

And as it happens... the 3090 is the only Nvidia card which has more VRAM at a given tier than AMD's equivalent.

I know supply and demand has a bearing, and for most of the last few years' boom, AMD cards were harder to get than Nvidia ones.
However, I think VRAM has a lot to do with this too, and I think - despite the denial on forums - that buyers are reluctant to buy YAEGC (Yet-Another-Eight-Gig-Card).
It only applies to the RX 6700 XT and above. The RTX 3060 12GB has more VRAM than an RX 6600/RX 6600 XT/RX 6650 XT and is a PCI-E 16x card, and definitely when it comes to the entry-level AMD cards you have:
1.) Not a huge amount of memory bandwidth
2.) Infinity Cache that is actually less than optimal at 32MB (1/3 the amount of the RX 6700 XT) on the RX 6600 XT and 16MB on the RX 6500 XT
3.) PCI-E 8x links

Having used an RX 6600 XT, RX 5600 XT and RX 5700 XT, there are situations where the former just falls down relative to the latter. The RX 5700 XT had more consistent performance in certain scenarios over the first on a PCI-E 3.0 system.

So in the past AMD would have offered more VRAM at lower tiers; now they don't. Now look at the RX 7600 XT: it appears to be only an 8GB card. The RTX 4060 and RTX 4060 Ti are also 8GB cards. It will be interesting to see if the whole cutting of PCI-E bandwidth to 8x will continue with this generation or was a one-off, and whether the RX 8600 XT/RX 9600 XT/RTX 5060/RTX 6060 end up massively lagging behind in VRAM, memory bandwidth, cache, etc. If people don't believe this: the RTX 3050 was PCI-E 8x too, as it used a 107-series dGPU, and it was nearly £300. But the RTX 4060 uses a 107-series dGPU too. So nearly £400 then? That means essentially MX-class dGPUs under £300. So tiny buses, low VRAM, etc.

With RT using more VRAM, unless these later cards actually increase VRAM to 16GB, rather than staying stuck at 8GB/12GB, they will run out of steam! Think if the RTX 6060 is faster than an RTX 3090 Ti but with only 12GB of VRAM, an 8x PCI-E link, a 128-bit bus, low cache amounts, etc. What will happen is that a few years down the line, if we have PCI-E 5.0 8x cards at £400+ and PCI-E 5.0 4x cards pushed past £200, a lot of people might find issues. So many on this forum are incapable of thinking ahead (just like when they defended cards like the GTX 1060 3GB and 8800GT 256MB), which is why these trends happen. OFC, most of the big defenders will just upgrade long before the issues happen.

Cards such as the GTX 1060 and RX 480 had very long lifespans: not only were they the last time there was a decent jump in mainstream VRAM on both sides, they also had decent enough memory bandwidth and full-sized PCI-E buses. They were basically overprovisioned in certain resources.

It gets even worse when you think of DirectStorage, with these laptop dGPUs being sold as "premium" mainstream dGPUs. Now think of the situation where they are trying to page into VRAM and stream textures off an NVMe SSD at the same time as having low VRAM, low memory bandwidth and narrow PCI-E buses.
 
Last edited:
Oh boy, don't start that lot up again. Just pretend it's enough, to keep the peace. :cry:

The problem is the low-VRAM crowd do the most defending, but then do the most upgrading too! :cry:

How I remember the defenders of "256MB on an 8800GT is fine", who then proceeded to offload them quickly when the next generation came along.
 
Last edited: