So we're going from 2 gig to 12 gig minimum in 3 years? Give it a rest.
You were saying.....
XCOM 2 on a single TitanX maxed @2160p.
A GTX 980 Ti or Fury X won't run the game at those settings.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
So we're going from 2 gig to 12 gig minimum in 3 years? Give it a rest.
Kaap, have you tried the game on fury x?
Bear in mind that Kaap's idea of max settings means running 8X MSAA even at 4K res. It has always been the case that we could max out VRAM in games at settings nobody would realistically use on a single GPU.
Even a Titan X at 4K in XCOM 2 is getting ~15 FPS with max settings (8X MSAA).
Yes, but that's the state of the current generation of games; memory requirements are bound to climb year by year, so his reasoning about future VRAM recommendations isn't that far off. Take VR, for example: you want absolute minimum latency, so you want as much data as possible loaded into your video card's memory to ensure the best possible experience.
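The arithmetic behind those rising VRAM figures is easy to sanity-check. Below is a back-of-the-envelope sketch in Python; the texture count, sizes, and uncompressed RGBA8 format are made-up illustration numbers, not measurements from any real game (real engines use block-compressed textures, so treat the result as a rough upper bound).

```python
# Rough VRAM footprint estimate; all numbers here are hypothetical.

def texture_bytes(width, height, bytes_per_pixel=4, mip_chain=True):
    """Approximate size of one uncompressed texture; a full mip chain adds ~1/3."""
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mip_chain else base

# Hypothetical scene: 100 high-res 4096x4096 textures resident at once,
# plus half a dozen full-screen render targets at 3840x2160 (4K).
textures = 100 * texture_bytes(4096, 4096)
render_targets = 6 * texture_bytes(3840, 2160, mip_chain=False)

total_gib = (textures + render_targets) / 2**30
print(f"Estimated VRAM footprint: {total_gib:.1f} GiB")  # ~8.5 GiB
```

Doubling texture resolution quadruples that texture term, which is why the numbers climb so quickly from one generation of games to the next.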
According to HardOCP, Rise of the Tomb Raider is using 8 GB of VRAM at 4K on a Titan X with no AA and 9.5 GB with 4x AA.
VRAM Usage
VRAM utilization was one of the more interesting aspects of Rise of the Tomb Raider. It appears to be very different depending on what GPU you are running. There is no question this game can consume large amounts of VRAM at the right settings. With the GeForce GTX 980 Ti we maxed out its 6GB of VRAM just at 1440p with No AA and maximum settings. The TITAN X revealed that it was actually using up to 7GB of VRAM for that setting. In fact, when we pushed the TITAN X up to 4K with SSAA it used up to almost 10GB of VRAM.
However, for the AMD Radeon R9 390X utilization was a bit odd when you first see it, never exceeding 4.2GB and remaining "flat" with "Very High" textures and SSAA. We did see the proper decrease in VRAM using lower settings, but the behavior was odd indeed. This didn't seem to negatively impact the video card however. The VRAM is simply managed differently with the Radeon R9 300 series.
The AMD Radeon R9 Fury X kind of backs that statement up, since it was able to allocate dynamic VRAM for extra capacity past its 4GB of dedicated VRAM. We saw up to 4GB utilization of dynamic VRAM. That allowed the Fury X to keep its 4GB of dedicated VRAM maxed out and then use system RAM for extra storage. In our testing, this did not appear to negatively impact performance. At least we didn't notice anything in terms of choppy framerates or "micro-stutter." The Fury X seems to be using the dynamic VRAM as a cache rather than a direct pool of instant VRAM. This would make sense, since it did not cause a performance drain and obviously system RAM is a lot slower than the local HBM on the Fury X. If you remember, a good while ago AMD was making claims to this effect, but this is the first time we have actually been able to show results in real world gaming. It is awesome to see some actual validation of these statements a year later. This is what AMD said about this in June of 2015.
Note that HBM and GDDR5 memory sizes can't be directly compared. Think of it like comparing an SSD's capacity to a mechanical hard drive's capacity. As long as both capacities are sufficient to hold local data sets, much higher performance can be achieved with HBM, and AMD is hand tuning games to ensure that 4GB will not hold back Fiji's performance. Note that the graphics driver controls memory allocation, so it's incorrect to assume that Game X needs Memory Y. Memory compression, buffer allocations, and caching architectures all impact a game's memory footprint, and we are tuning to ensure 4GB will always be sufficient for 4K gaming. Main point being that HBM can be thought of as a giant embedded cache, and is not directly comparable to GDDR5 sizes.
Now specifically that statement backs up "4K gaming" and we will give AMD (and NVIDIA for that matter) a pass at this moment as neither produces "4K gaming" GPUs. The important statement here is this: "AMD is hand tuning games to ensure that 4GB will not hold back Fiji's performance." We have said over and over again that this statement by AMD did not ring true in terms of needed HBM capacity, and this is actually the first time we have seen AMD's statement make sense to us in real world gaming with Triple A titles. So kudos to AMD for being able to show us that this statement has finally come to fruition. The downside we see to this statement is that AMD will have to "hand tune" games in order to make this work. What games will get hand tuned in the future? That said, AMD seems to have done an excellent job hand tuning Rise of the Tomb Raider for its HBM architecture.
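For anyone wanting to reproduce this kind of VRAM-utilization logging on an NVIDIA card, here is a minimal sketch using the pynvml NVML bindings (assuming they are installed). Note that it only reports what the driver has allocated on the card, which, as the quote above points out, is not the same as what a game strictly needs, and dynamic system-RAM-backed allocations like the Fury X's will not show up in a dedicated-VRAM counter like this.

```python
# Minimal dedicated-VRAM usage logger for NVIDIA GPUs via NVML.
# Run it alongside the game and watch how allocation changes with settings.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"VRAM used: {mem.used / 2**30:.2f} GiB of {mem.total / 2**30:.2f} GiB")
        time.sleep(1.0)  # one sample per second
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```

On AMD cards you would need different tooling such as GPU-Z, and on any card the allocated figure is an upper bound on what the game actually needs.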
Quite an interesting article from PCPer about Ashes and FCAT, which all ends up discussing the Microsoft Store.
http://www.pcper.com/reviews/Genera...up-Ashes-Singularity-DX12-and-Microsoft-Store
Just had a read of that and it is indeed interesting. VSync is on and frames are dropped, and there are some interesting points about the Windows Store and the same behavior. Will read it again tomorrow when I haven't just finished a late shift!