RIP <=8 GB Vram

Soldato
Joined
6 Feb 2019
Posts
17,594
I found the VRAM consumption between DX11 and DX12... interesting...
I wonder if there is a way to limit VRAM to 8GB to see if the frame rate is the same in DX12?


Watch the video again - only about 5GB of VRAM is being used on both cards - Borderlands 3 doesn't need much VRAM; even a 6GB card is fine.

What you were looking at is system RAM. Why is system RAM showing 50% higher usage in DX12 though lol
 
Soldato
Joined
8 Jun 2018
Posts
2,827
Watch the video again - only about 5GB of VRAM is being used on both cards - Borderlands 3 doesn't need much VRAM; even a 6GB card is fine.

What you were looking at is system RAM. Why is system RAM showing 50% higher usage in DX12 though lol
Ah yes, thank you. That's right (I'll correct my prior post). But yeah, it still begs the question why it's using so much system RAM :confused:
 
Soldato
Joined
6 Feb 2019
Posts
17,594
So you're saying the number of posts on one forum provides more accurate statistics than Steam's global survey? lol

I'm just challenging the person who said the Radeon 7 was becoming popular.

Yet on these forums there are only 34 Radeon 7 owners and 90 RTX 2080 Ti owners. Poor Radeon, its numbers can only go down since it's no longer in production.

As for Steam: I can't even see the Radeon 7, can you?

https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam

2080 = 0.83%
2080 ti = 0.48%
2070 = 1.34%
2060 = 1.23%
1660ti = 0.89%

5700/5700xt/radeon 7 = nowhere to be found
 
Permabanned
Joined
2 Sep 2017
Posts
10,490
I'm just challenging the person who said the Radeon 7 was becoming popular.

Yet on these forums there are only 34 Radeon 7 owners and 90 RTX 2080 Ti owners. Poor Radeon, its numbers can only go down since it's no longer in production.

As for Steam: I can't even see the Radeon 7, can you?

https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam

2080 = 0.83%
2080 ti = 0.48%
2070 = 1.34%
2060 = 1.23%
1660ti = 0.89%

5700/5700xt/radeon 7 = nowhere to be found

Radeon VII is a limited edition card, with worldwide sales of perhaps no more than 5,000 units.
 
Man of Honour
Joined
19 Oct 2002
Posts
29,524
Location
Surrey
I'm just challenging the person who said the Radeon 7 was becoming popular.

Yet on these forums there are only 34 Radeon 7 owners and 90 RTX 2080 Ti owners. Poor Radeon, its numbers can only go down since it's no longer in production.

As for Steam: I can't even see the Radeon 7, can you?

https://store.steampowered.com/hwsurvey/Steam-Hardware-Software-Survey-Welcome-to-Steam

2080 = 0.83%
2080 ti = 0.48%
2070 = 1.34%
2060 = 1.23%
1660ti = 0.89%

5700/5700xt/radeon 7 = nowhere to be found
35 owners on the forum. I have one but haven't yet had my name added to the owners list.

Not that one more makes any difference :D
 
Associate
Joined
17 Sep 2018
Posts
1,432
I found the VRAM [system RAM] consumption between DX11 and DX12... interesting...
I wonder if there is a way to limit VRAM [it] to 8GB to see if the frame rate is the same in DX12?


Amusingly, this video is of a 4GB card, the AMD Fury, which has allocated system RAM as VRAM. Both Fury cards perform worse than an 8GB RX 580, but I don't think this is a VRAM issue, as a 3GB 1060 beats the Fury too.

 
Soldato
Joined
8 Jun 2018
Posts
2,827
Amusingly, this video is of a 4GB card, the AMD Fury, which has allocated system RAM as VRAM. Both Fury cards perform worse than an 8GB RX 580, but I don't think this is a VRAM issue, as a 3GB 1060 beats the Fury too.
That's not the issue here. Borderlands 3 uses Unreal Engine, which has historically been fine-tuned for Nvidia (yes, this is finally changing with the release of GOW4 and hopefully B3 as AMD sponsors more UE games using DX12).

What will happen is that by December 2019 AMD will release driver improvements with their annual driver boost update. By then DX12 should be out of beta and fine-tuned for this game.

However (and I do agree), there has been criticism that the game itself isn't as fine-tuned as it should be (volumetric fog needs to be disabled). A 2080 Ti should be getting 1080p-class results at 1440p (see the 2:42 mark of the video).

Which is actually in the video you provided. Did you not see that commentary? I'm not sure what the issue is, but the performance hit shouldn't be that dire for this game IMO.

Furthermore, this is in DX11, not DX12. AMD GPUs jump ahead in DX12 (as shown near the beginning of the video). Perhaps the performance issues in DX11 are alleviated in DX12 (once it matures) IMO.




HU stated they didn't use DX12 because it wasn't as smooth as DX11. For whom, AMD or Nvidia? That part is nebulous. He further states that a Patreon member told him that DX12 worked better than DX11 on his 2080 Ti!! However, as their own graph shows, the 2080 Ti falls behind in DX12. It's not hard to paint a clear picture of what's going on here...

Edit:
At a guess: if AMD is more involved in PC games that use DX12, I would speculate that they are pushing async compute via parallelism, among other aspects that favour AMD.

I would love to see a 1080 Ti/2070S/2080S playing this game in DX12 with the MSI AB OSD up. I would like to see the GPU temps/fps/VRM temps/power consumption.
 
Last edited:
Soldato
Joined
8 Jun 2018
Posts
2,827
Are you saying you would choose a 3.4% increase in average fps vs an 8.5% drop in 1% lows?
Ah, here we go...
What I am saying: he further states that a Patreon member told him that DX12 worked better than DX11 on his 2080 Ti!! However, as their own graph shows, the 2080 Ti falls behind in DX12. It's not hard to paint a clear picture of what's going on here...
You have to quote the whole thing to keep the context.

Point is, I'm not seeing mass complaints about issues using DX12. And by HU's own admission, patron(s) echo the same thing. The only difference is that the 2080 Ti is showing a lower frame rate in DX12.

Therefore, it would be prudent to show DX12 results with a caveat, and a possible addendum once DX12 is out of beta for Borderlands 3.
 
Soldato
Joined
22 Nov 2006
Posts
23,382
DX12 should be a lot better. But you need low-level programming skills to use it well. Something lacking in today's gaming industry :(

The people with real talent (like John Carmack etc) moved on long ago. It's largely graduates just out of Programming 101 who don't know this stuff.
 
Last edited:
Soldato
Joined
8 Jun 2018
Posts
2,827
Are you saying you would choose a 3.4% increase in average fps vs an 8.5% drop in 1% lows?
Well, that didn't take long. I found an article that corroborates exactly what I was saying about favouring DX11 in games that use UE4.


As you can see, AMD clearly pulls ahead when using DX12, which is why I questioned HU's omission of DX12 results. Furthermore, the reviewer there didn't indicate any stuttering issues using it. However, they did state (in part):

Developing for two APIs, that are so different, definitely complicates things and increases development time. Still, DirectX 12 is the future, and insights gained now will only benefit future titles. While DirectX 11 works very well, the DX12 implementation seems a bit unstable. It starts with extremely long shader loading times during startup when DirectX 12 is enabled. After the intro animations (which are unskippable btw), you'll see Claptrap dance across the screen for several minutes (!) with no indication what's happening and whether the game is crashed or not. I dug a bit deeper and it seems that the game re-compiles all its shaders during that time, even though DirectX 12 has a shader cache feature that should mitigate exactly that problem. On DirectX 11 the game loads through that same phase within a few seconds.
This explains why the DX12 portion of the game is still in beta, and what fixes should be in the works for an updated DX12 release in the game.
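For anyone curious what the "shader cache feature" the reviewer mentions looks like at the API level, here's a minimal sketch of D3D12's ID3D12PipelineLibrary, which lets an engine serialise compiled pipeline states to disk and reload them on the next launch instead of recompiling every shader. This is purely my own illustration (the helper name, the pass name and pso_cache.bin are made up), not anything from Gearbox's code:

```cpp
// Minimal sketch only - hypothetical helper, not Gearbox's actual code.
// Caches compiled PSOs with ID3D12PipelineLibrary so a later launch can skip
// shader recompilation.
#include <d3d12.h>
#include <wrl/client.h>
#include <fstream>
#include <vector>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12PipelineState> GetOrCreatePso(
    ID3D12Device1* device,
    const D3D12_GRAPHICS_PIPELINE_STATE_DESC& psoDesc,
    const std::vector<char>& cachedBlob) // contents of pso_cache.bin from the last run, may be empty
{
    // Recreate the pipeline library from the previous run's blob
    // (an empty blob simply gives a fresh, empty library).
    ComPtr<ID3D12PipelineLibrary> library;
    device->CreatePipelineLibrary(cachedBlob.data(), cachedBlob.size(),
                                  IID_PPV_ARGS(&library));

    ComPtr<ID3D12PipelineState> pso;
    // Fast path: the driver deserialises the cached PSO instead of recompiling shaders.
    if (FAILED(library->LoadGraphicsPipeline(L"opaque_pass", &psoDesc, IID_PPV_ARGS(&pso))))
    {
        // Slow path (first run / cache miss): full shader compile, then store it for next time.
        device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pso));
        library->StorePipeline(L"opaque_pass", pso.Get());

        std::vector<char> blob(library->GetSerializedSize());
        library->Serialize(blob.data(), blob.size());
        std::ofstream("pso_cache.bin", std::ios::binary).write(blob.data(), blob.size());
    }
    return pso;
}
```

If the blob from the previous run is present, LoadGraphicsPipeline is essentially a deserialisation rather than a compile, which is the sort of thing that should prevent the multi-minute Claptrap dance the review describes.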

Our performance testing shows that when the DirectX 11 API is used, NVIDIA cards definitely have an advantage over AMD cards. Especially older AMD cards based on the "Polaris" architecture, such as the RX 570/580/590, lose a lot of performance when not running in DirectX 12. The newer Vega and Navi cards seem less affected...
Driver bug, plain and simple. RTG needs to address this and fix the Polaris issues in the game.


DirectX 12 on the other hand treats AMD cards better. NVIDIA on the other hand is having difficulties, but the performance loss for them is relatively small, just a few percent. For NVIDIA users I would definitely recommend DirectX 11 as it gives better performance
Par for the course, as they say. If you are using an AMD video card and you buy an AMD-sponsored game that uses Unreal Engine 4, make sure you enable DX12 in order to take advantage of the enhancements made to the engine and get the best performance possible. Even if that means waiting until a better version/release of DX12 is made available (as in this case).

https://www.techpowerup.com/review/borderlands-3-benchmark-test-performance-analysis/5.html


DX12 should be a lot better. But you need low-level programming skills to use it well. Something lacking in today's gaming industry :(

The people with real talent (like John Carmack etc) moved on long ago. It's largely graduates just out of Programming 101 who don't know this stuff.

Well, it's not that cut and dried. Yes, you want a game engine to work as close to the metal as possible, which is why consoles are so efficient. However, there is a conundrum between AMD and Nvidia that was talked about but IMO forgotten, and it is: how do we do this in DX12? Well, AMD had Mantle, which is now Vulkan. But MS is in the mix of this now and they are pushing DX12. And there is a divide as to how DX12 async compute should "work" on the GPU.

AMD GPUs work better with more parallelism. They also prefer a hardware scheduler and other hardware-related features, which is why AMD cards draw more power than Nvidia's.

Nvidia GPUs work better with more concurrency, with pre-emption using context switching, etc. They also prefer to use software (i.e. the CPU) for their scheduler, etc., which is why (IMO) there are so many "stutter" threads, topics and posts, as any given game becomes more CPU-limited. Keep in mind that with Turing, Nvidia GPUs do run better at parallelism than they did with Kepler, etc.

Both are fighting for their approach to win out. IMO AMD is winning because... consoles... in particular next-gen gaming, but I digress.

Parallelism works best with AMD's GCN. But with RDNA that has changed a bit (does anyone still remember abit... digress again). It's not wholly clear how RDNA will impact parallelism, but by the looks of the Time Spy results it still does pretty badly at concurrency with context switching, etc. (i.e. the way Nvidia wants DX12 to work). Side note: if you take a look at Time Spy and Fire Strike you may wonder why the 5700 XT does so much better in Fire Strike...

Anyway, RDNA (from what I've gathered) is supposed to help make DX11 games perform better, but it's still not at the level of how Nvidia does it. I haven't found a lot on RDNA though, and will admit I haven't researched it enough to comment further.

But hopefully one day all these games will be on DX12/Vulkan. Well, they will be once next-gen consoles are released.
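To make "async compute" a bit more concrete: at the API level it just means the engine creates a second, compute-only command queue alongside the normal direct queue and submits work to both. This is a purely illustrative D3D12 sketch (not from Borderlands 3); whether the two queues genuinely overlap is then down to the hardware - GCN's compute engines can run them in parallel, while a design leaning on pre-emption ends up time-slicing them.

```cpp
// Purely illustrative sketch of async compute setup in D3D12.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphicsQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // The usual "direct" queue: accepts graphics, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // A second, compute-only queue. Submitting work here (light culling,
    // particles, etc.) allows the GPU to run it alongside the graphics queue;
    // whether it truly runs in parallel is up to the hardware and driver.
    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));
}
```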
 
Last edited:
Soldato
Joined
22 Nov 2006
Posts
23,382
Surely AMD's way of having things done by the GPU is better than dumping it on the CPU. That's the whole point of discrete hardware after all :/
 
Soldato
Joined
8 Jun 2018
Posts
2,827
What is the meaning of "Concurrency with context switching"? Any advantages?

Although I'm probably oversimplifying how it works, it goes a little something like this:

AMD's use of parallelism in async compute can complete multiple graphics and compute tasks at the same time; it can run several computations in parallel. This is how GCN was designed to work. The problem is that not all games work that way, which causes inefficiencies in GCN, i.e. games can often only use up to 2/3 of GCN as a whole, and at times even less. This is why RDNA was born; you cannot get most game engines to consistently feed GCN like that.

Nvidia uses concurrency with pre-emption in async compute, which can still complete multiple graphics and compute tasks, but they're not all done at the same time. It's broken up, "if you will".

Here is an example (image): the top part is how GCN likes things done; the bottom is how Nvidia likes things done.

Now I must say I did forget something, and it's called pre-emption - something Nvidia does very, very well.

CS = context switching...



One of the main reasons GCN didn't do so well in DX11 was because of how the data was fed to it "serially".

This is what made Nvidia's way of doing things work so well, and why Nvidia does well in DX11. That's the advantage Nvidia has over Radeon.



Now, that's not to say this is how all DX11 games were run.
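To tie that back to the API: with a separate compute queue, the only coupling between the queues should be explicit fences at real dependency points, and everything in between is free to overlap on hardware that schedules the queues in parallel (the GCN approach), or gets time-sliced via pre-emption/context switching otherwise. Another purely illustrative sketch, where the command lists and pass names are hypothetical:

```cpp
// Purely illustrative: overlap compute and graphics, synchronising with a
// fence only where the graphics pass actually consumes the compute results.
#include <d3d12.h>
#include <cstdint>

void SubmitFrame(ID3D12CommandQueue* computeQueue,
                 ID3D12CommandQueue* graphicsQueue,
                 ID3D12GraphicsCommandList* asyncComputeList,   // e.g. light culling (hypothetical pass)
                 ID3D12GraphicsCommandList* independentGfxList, // e.g. shadow maps / G-buffer
                 ID3D12GraphicsCommandList* dependentGfxList,   // e.g. lighting pass reading the compute output
                 ID3D12Fence* fence,
                 uint64_t& fenceValue)
{
    // Compute work goes to its own queue and signals a fence when finished.
    ID3D12CommandList* c[] = { asyncComputeList };
    computeQueue->ExecuteCommandLists(1, c);
    computeQueue->Signal(fence, ++fenceValue);

    // Graphics work with no dependency on the compute results is submitted
    // straight away, so it is free to overlap with the compute queue.
    ID3D12CommandList* g1[] = { independentGfxList };
    graphicsQueue->ExecuteCommandLists(1, g1);

    // Only the pass that actually reads the compute output waits on the fence
    // (a GPU-side wait - the CPU is not blocked here).
    graphicsQueue->Wait(fence, fenceValue);
    ID3D12CommandList* g2[] = { dependentGfxList };
    graphicsQueue->ExecuteCommandLists(1, g2);
}
```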
 
Last edited:
Permabanned
Joined
2 Sep 2017
Posts
10,490
GCN was inefficient because the software was written to take advantage of the rules that Nvidia prefers to set.
GCN-optimised titles run very well on Radeon...
 
Soldato
Joined
8 Jun 2018
Posts
2,827
GCN was inefficient because the software was written to take advantage of the rules that Nvidia prefers to set.
GCN-optimised titles run very well on Radeon...

IMHO, I too thought that developers were "biased", but now, not so much. I realised that it was the rules that Microsoft set for DX10, DX11, DX12, for example.

Now, there was the Assassin's Creed "gate" where the developer removed DX10.1 at the behest of Nvidia, as it made Radeon cards faster in that game. The others were the tessellation "gate" in Crysis (2?) and HAWX, as well as the use of PhysX to replace the basic rasterisation that was normally used in games (Batman, Borderlands 2, etc). I recall in Need for Speed: Shift it was determined that PhysX calculations were used for tire physics, etc. So, I know there are a few examples.

But for games like GTA V, Witcher 3, Watch Dogs, Saints Row, etc, there is no real excuse why AMD hasn't improved on those games by now to show them beating Nvidia.

Edit made to get to the point.
 
Last edited: