Doom Vulkan vs OpenGL performance

Man of Honour
Joined
13 Oct 2006
Posts
91,128
Vulkan is fresh, built from the ground up, much easier to use, and far better performance-wise. All it needs now is for the professional market to move off their old OGL stacks and versions and we'll all have a grand time.

OGL has its eccentricities but it is a higher level abstraction - Vulkan is potentially a lot harder to use - though a large part of the complexity is getting to grips with the memory layout/handling that you don't have to do with OGL or DX11.
 
Associate
Joined
30 May 2016
Posts
620
Bear in mind 1080s only have about 15% more RAM bandwidth.

GDDR5X is double at the "same" frequencies, but the clock speed of it on 1080s is really low compared to 1070s; my overclocked VRAM on my 1070 is only 3% lower than 1080 spec.

Yes, stock 1070 can do 8Gbps whereas the 1080 is at 10Gbps. With a little bit of OC the 1070 comes close to stock 1080 (1.15 * 8 = 9.2).

What is your point though?

My point is this: with low-level APIs, 8Gbps is different from 10Gbps, especially when you're doing one thing at a time (the Nvidia approach). You can get it working smoothly with 10Gbps, but then the 1070 will show degraded quality at times. You can get it working smoothly with 8Gbps, but then the 1080 *may* under-perform a bit. Either way, you've got one more thing to deal with now as a developer, and you're certainly not going to test each and every Nvidia card. You'll pick some baseline card to optimise for, then let Nvidia sort out the rest of its range with 'drivers'.

P.S. If you actually have a 1070 and can reproduce (with stock clocks) what TechYesCity was complaining about, I'd be very interested to know whether the problem goes away when you OC your card.
 

J.D

J.D

Soldato
Joined
26 Jul 2006
Posts
5,223
Location
Edinburgh
I've just tried DOOM for the first time and I was blown away by how smooth it was using the Vulkan API. This is on a 290X at stock clocks. If BF1 uses this I might hold off on purchasing a 1070. Not that I needed it, more that I wanted it, as I rarely play anything demanding these days.

I will try OpenGL tomorrow and see what the difference is, but this was my first impression so I wanted to share. I wasn't interested in the game, but this thread made me part with money to see what the fuss was about. Everything ultra at 1080p, so I will try OpenGL at 1080p, then try both at 1440p.

If the consoles are adopting DX12 with AMD hardware then it may be exciting times ahead, but knowing how aggressive Nvidia are, they will have hardware-supported async compute soon. It's nice to see the time spent on DX12 and Mantle is paying off, as AMD usually get shafted. I hope now AMD will be rewarded for their work. Nvidia will only have themselves to blame if they give up ground to AMD, so I hope developers will continue to use this API. TWIMTBP titles will too, when Nvidia step up their game :p.

Please don't slate me for my lack of knowledge about all this. I haven't spent a lot of time keeping up to date on things over the last year due to work.

buying this game just for vulkan :D

£19 here

https://www.gamebillet.com/uk/doom.html

use code GBITADUK20

That code never worked for me but "summersale" did and the game cost me £19.59.

P.S <3 for AMD Matt ;)
 
Soldato
Joined
25 Sep 2009
Posts
9,627
Location
Billericay, UK
It requires either very high bandwidth OR true parallel async compute.

I have not programmed this sort of thing yet, but from what I've read the following seems quite a reasonable explanation to me:

AMD cards use the copy queue to stream the textures while compute/3d queues are still doing actual graphics work (in parallel). NVidia cards can't copy in parallel and must do their preemption thing to implement the async behavior (i.e. pause transfer to render frames). Therefore it follows that this effect will appear when the memory bandwidth is not high enough to complete the data transfer within the time-slices NVidia allocates to copying.

In short, memory bandwidth for NVidia cards becomes more important, whereas AMD cards can get away with lower bandwidth as long as they start transferring textures in time.

I assume Nvidia can fix this on the 1070 by changing the driver to prioritize copying a bit (at the cost of slightly lower FPS during that time). Or the developers can tune this manually to get the 1070 to work perfectly. But it's just too time-consuming to do this for every single card. For example, the 1080 with its faster GDDR5X seems to be fast enough and is totally unaffected. Tuning a path for the 1070 would cause the 1080 to take an FPS hit for no reason. So you need two different paths as a dev to get both working optimally? And what about the other models?

I don't think people realise the importance of this. Sure, 1-2 secs of worse image quality on the 1070 is not a big deal (seriously, who cares?). However, this is the kind of thing that makes tuning for AMD so much easier.

Multi-engine is hard enough already, even without having to worry about all this. On AMD cards you can just schedule whatever you need and let the ACEs figure out the scheduling, without having to worry so much about every single detail. Whereas Nvidia needs devs to put in time and tune these operations manually, or has to do it itself via game profiles in its drivers (not sure how well the latter will work, but that's what Nvidia keeps saying: we will address this stuff in software).
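To make that concrete, this is roughly what streaming a texture on its own transfer queue looks like in Vulkan. I haven't written this myself yet, so treat it as a sketch based on the spec and samples rather than on how id actually do it in Doom; the queues, staging buffer and image are assumed to have been created elsewhere, the names are made up, and layout transitions and queue-family ownership transfers are left out:

Code:
#include <vulkan/vulkan.h>

// Sketch: record and submit a buffer->image copy on a dedicated transfer
// queue. The graphics queue is never touched here, so it is free to keep
// rendering; whether the copy genuinely overlaps with rendering is then
// down to the hardware/driver, which is the AMD vs Nvidia difference above.
void streamTextureAsync(VkQueue transferQueue,        // queue from a transfer-only family
                        VkCommandBuffer transferCmd,  // allocated from a pool on that family
                        VkBuffer staging,             // host-visible buffer holding the texel data
                        VkImage texture,              // destination, already in TRANSFER_DST layout
                        uint32_t width, uint32_t height,
                        VkSemaphore copyDone,         // signalled when the copy has finished
                        VkFence copyFence)
{
    VkCommandBufferBeginInfo begin{};
    begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    begin.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
    vkBeginCommandBuffer(transferCmd, &begin);

    // Whole image, single mip level, single array layer.
    VkBufferImageCopy region{};
    region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    region.imageSubresource.layerCount = 1;
    region.imageExtent = {width, height, 1};
    vkCmdCopyBufferToImage(transferCmd, staging, texture,
                           VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);

    vkEndCommandBuffer(transferCmd);

    // Submit on the transfer queue only; signal a semaphore/fence rather than
    // stalling everything with vkQueueWaitIdle.
    VkSubmitInfo submit{};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &transferCmd;
    submit.signalSemaphoreCount = 1;
    submit.pSignalSemaphores = &copyDone;
    vkQueueSubmit(transferQueue, 1, &submit, copyFence);
}

The render loop would then wait on copyDone (or check copyFence) before it first samples the new texture, instead of blocking the graphics queue.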

I find it interesting that the basic premise for devs going forward is now: 'worry about getting Nvidia to work acceptably, as AMD cards will sort themselves out'. For example, this DX12 dev guide (yes, I know Doom is Vulkan, but the multi-engine paradigm is the same between the two) actually goes as far as to spell it out:

* Choose sufficiently large batches of short-running shaders.

Long-running shaders can complicate scheduling on Nvidia's hardware. Ensure that the GPU can remain fully utilized until the end of each batch. Tune this for Nvidia's hardware; AMD will adapt just fine.

Nvidia is getting into an age where it will have minor quirks like that all over the place, relying on its mindshare for users to 'tolerate' them while it sorts things out in drivers. At the same time, developers will have less and less incentive to profile every single Nvidia card model to get it working acceptably. They will choose a baseline (and I hope it's the 1060, so that things work for most users) and tune for that minimum; lower models will have degraded quality while higher models may take a bit of an FPS hit.

I really want to see how well the 1060 with its 192-bit bus will cope with these scenes in Vulkan mode. It may make the problem much more visible, which would confirm my suspicions (i.e. they optimised Doom for a 1080, which makes the 1070 suffer and the 1060 suffer even more).

You are way too intelligent for this forum. :o But what you say does make sense even to an enthusiast like myself who only has a passable knowledge of how these things work.
 
Soldato
Joined
19 Dec 2010
Posts
12,030
Here you go folks.

Doom Vulkan | RX 480 | 1080P - Ultra Settings | The UAC

To compare to a single Radeon Pro Duo click here.

Thanks Matt. Are you doing a 1440p one?

I am still debating changing to a 480 or 1060, even though it would be more of a sidegrade from my 290.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,128
It requires either very high bandwidth OR true parallel async compute.

I have not programmed this sort of thing yet, but from what I've read the following seems quite a reasonable explanation to me:

AMD cards use the copy queue to stream the textures while compute/3d queues are still doing actual graphics work (in parallel). NVidia cards can't copy in parallel and must do their preemption thing to implement the async behavior (i.e. pause transfer to render frames). Therefore it follows that this effect will appear when the memory bandwidth is not high enough to complete the data transfer within the time-slices NVidia allocates to copying.

In short, memory bandwidth for NVidia cards becomes more important, whereas AMD cards can get away with lower bandwidth as long as they start transferring textures in time.

IIRC that is only fully true for Kepler. Maxwell can do it (copy + compute/graphics in parallel) but with limitations (it is much more dependent on drawcall boundaries for dispatching new copy tasks, so you potentially end up with a lot of ineffective use of the hardware), and Pascal has improved memory atomics amongst other things, but from a quick look it would appear you have to specifically program for the Pascal feature set to get the full benefit of it.
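How much of that is actually visible to the application is another question; about all you can do portably in Vulkan is check whether the driver advertises a dedicated transfer-only queue family and go from there. Rough sketch only (the function name is made up for illustration):

Code:
#include <vulkan/vulkan.h>
#include <vector>

// Returns the index of a queue family that supports transfer but not
// graphics or compute (i.e. the "pure" copy engine, if the driver exposes
// one), or -1 if there isn't one. How much genuinely parallel copy hardware
// sits behind that family still varies by vendor and generation.
int findDedicatedTransferFamily(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i) {
        const VkQueueFlags flags = families[i].queueFlags;
        if ((flags & VK_QUEUE_TRANSFER_BIT) &&
            !(flags & (VK_QUEUE_GRAPHICS_BIT | VK_QUEUE_COMPUTE_BIT)))
            return static_cast<int>(i);
    }
    return -1;  // fall back to copying on the graphics/compute queue
}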
 
Caporegime
Joined
24 Sep 2008
Posts
38,322
Location
Essex innit!
Pointless? It looks like Nvidia Pascal cards gain frames in Vulkan compared to OpenGL, so how is that pointless?

Doom is a bit of an oddity in terms of which API is best to run. Depending on the area, I get more frames with OGL, and a different area sees Vulkan getting more frames. This is on a 6850K at 4.4, which might help OGL. Looking at benches on the 480 though, Vulkan is clearly the best API to run.
 
Soldato
Joined
8 Mar 2010
Posts
4,967
Location
Aberdeenshire
Doom is a bit of an oddity in terms of which API is best to run. Depending on the area, I get more frames with OGL, and a different area sees Vulkan getting more frames. This is on a 6850K at 4.4, which might help OGL. Looking at benches on the 480 though, Vulkan is clearly the best API to run.

Yeah, maybe in some areas the gains would be less on a 480 too, but overall it is probably always higher than OpenGL. I found at 4K it actually went from quite choppy to very smooth. It's almost like a GPU upgrade.
 
Soldato
Joined
23 Dec 2013
Posts
3,527
Location
North Wales
Vulkan works like a dream for me in Doom, getting ~160FPS in most areas with it turned on (and my daily-usage overclock of 1130/1600). It really does hammer my card, getting up to about 83C on the core and 80C on the VRMs. Good old 290X Lightning, still staying relatively cool even under full load. Vulkan seems to squeeze every last bit of juice out of the card, just like Mantle did.

Not going to get involved in the mindless bickering in here, just going to say that for me personally it gave a really nice boost and it's smooth as butter.
 
Associate
Joined
1 Jan 2012
Posts
279
As an Nvidia owner I will say Vulkan is pointless for Nvidia and great for AMD, and as I won't be buying a new card for a while yet, give me DirectX or OpenGL games for my card.

It may as well be called AMD Vulkan :D
In Doom.

For right now.
I just did a quick comparison in Doom and I'm getting quite a nice boost from Vulkan compared to OpenGL. I'm running it with G-Sync and with Vulkan the FPS was pretty much sitting at 144 with the occasional dip into the 130s or 120s, but with OpenGL it was dipping into the 90s.

I should really test it without any G-Sync / V-Sync and get a proper FPS comparison, but it looks like I'm getting around a 30% performance boost in the area I tested. This might be more to do with my 3570K, which isn't overclocked, with Vulkan getting more work done with the limited CPU power. Looking in Afterburner, Vulkan had both the GPU and CPU hovering close to 100%, but with OpenGL both were lower, around 70-80%. That makes me wonder where the bottleneck is with OpenGL.

Edit: also, FXAA doesn't seem to work in OpenGL, but does in Vulkan :confused:. Although that doesn't seem to affect performance, which is why I use it.
 
Associate
Joined
30 May 2016
Posts
620
I just did a quick comparison in Doom and I'm getting quite a nice boost from Vulkan compared to OpenGL. I'm running it with G-Sync and with Vulkan the FPS was pretty much sitting at 144 with the occasional dip into the 130s or 120s, but with OpenGL it was dipping into the 90s.

I should really test it without any G-Sync / V-Sync and get a proper FPS comparison, but it looks like I'm getting around a 30% performance boost in the area I tested. This might be more to do with my 3570K, which isn't overclocked, with Vulkan getting more work done with the limited CPU power. Looking in Afterburner, Vulkan had both the GPU and CPU hovering close to 100%, but with OpenGL both were lower, around 70-80%. That makes me wonder where the bottleneck is with OpenGL.

Edit: also, FXAA doesn't seem to work in OpenGL, but does in Vulkan :confused:. Although that doesn't seem to affect performance, which is why I use it.

Presumably though, there's more to come right? Or is this as good as it gets for NVidia/Vulkan?

I'm asking because I've read that Doom's Vulkan is optimised for AMD and they're supposedly still working on squeezing more out of NVidia.
 