GAH! So it's true the dedicated encoding HW is decent?
It's the main reason I want one, now if only the AIBs would get their act together!
Mate, it chewed through encoding in no time. I am actually tempted to get a 4GB 480 just for that lol.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Here you go folks.
Doom Vulkan | RX 480 | 1080P - Ultra Settings | The UAC
To compare to a single Radeon Pro Duo click here.
Video is private?
Vulkan is fresh, built from the ground up, much easier to use, and far better performance-wise. All it needs now is for the professional market to move off their old OGL stacks and versions and we'll all have a grand time.
Bear in mind 1080s only have about 15% more RAM bandwidth in practice.
GDDR5X is double the data at the "same" frequency, but the clock speed it actually runs at on 1080s is really low compared to the GDDR5 on 1070s; my overclocked VRAM on my 1070 is only 3% lower than 1080 spec.
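For anyone who wants to sanity-check that, the arithmetic is simple (the transfer rates below are the stock specs as I understand them, so treat them as assumptions rather than gospel):

256 bit × 8 GT/s ÷ 8 = 256 GB/s (stock 1070, GDDR5)
256 bit × 10 GT/s ÷ 8 = 320 GB/s (stock 1080, GDDR5X)

So it's 25% more stock vs stock, but a 1070 with its VRAM overclocked to around 9.7 GT/s lands near 310 GB/s, only about 3% short of 1080 spec, which is where the much smaller real-world gap comes from.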
buying this game just for vulkan
£19 here
https://www.gamebillet.com/uk/doom.html
use code GBITADUK20
I've just tried DOOM for the first time and I was blown away by how smooth it was using the Vulkan API. This is on a 290X at stock clocks. If BF1 uses this I might hold off on purchasing a 1070. Not that I needed it, more that I wanted it, as I rarely play anything demanding these days.
I will try OpenGL tomorrow and see what the difference is, but this was my first impression so I wanted to share. I wasn't interested in the game, but this thread made me part with money to see what the fuss was about. Everything ultra at 1080P, so I will try OpenGL at 1080P, then try both at 1440P.
If the consoles are adopting DX12 with AMD hardware then it may be exciting times ahead, but knowing how aggressive Nvidia are, they will have hardware-supported async compute soon. It's nice to see the time spent on DX12 and Mantle paying off, as AMD usually get shafted. I hope now AMD will be rewarded for their work. Nvidia will only have themselves to blame if they give up ground to AMD, so I hope developers will continue to use this API. TWIMTBP titles will too, when Nvidia step up their game.
Please don't slate me for lack of knowledge about all this. I haven't spent a lot of time keeping up to date on things over the last year due to work.
That code never worked for me but "summersale" did and the game cost me £19.59.
P.S. <3 for AMD Matt!
It requires either very high bandwidth OR true parallel async compute.
I have not programmed this sort of thing yet, but from what I've read the following seems quite a reasonable explanation to me:
AMD cards use the copy queue to stream the textures while compute/3d queues are still doing actual graphics work (in parallel). NVidia cards can't copy in parallel and must do their preemption thing to implement the async behavior (i.e. pause transfer to render frames). Therefore it follows that this effect will appear when the memory bandwidth is not high enough to complete the data transfer within the time-slices NVidia allocates to copying.
In short, memory bandwidth for NVidia cards becomes more important, whereas AMD cards can get away with lower bandwidth as long as they start transferring textures in time.
I assume NVidia can fix this on 1070 by changing the driver to prioritize copying a bit (at the cost of some lower FPS during that time). Or the developers can tune this manually to get the 1070 to work perfectly. But it's just too time-consuming to do this for every single card. For example, the 1080 using the faster GDDR5X seems to be fast enough and is totally unaffected. Tuning a path for 1070 would cause the 1080 to take an FPS hit for no reason. So you need 2 different paths as a dev to get both working optimally? And what about the other models?
I don't think people realise the importance of this. Sure, 1-2 secs of worse image quality on the 1070 is not a big deal (seriously, who cares?). However, this is the kind of thing that makes tuning for AMD so much easier.
Multi-engine is hard enough already, even without having to worry about all this. On AMD cards you can just submit whatever you need and let the ACEs figure out the scheduling, without worrying so much about every single detail. Whereas NVidia needs devs to put in time and tune these operations manually, or has to do it themselves via game profiles in their drivers (not sure how well the latter will work, but that's what NVidia keeps saying: we will address this stuff in software).
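To make the queue-family picture concrete, here is a minimal C sketch (not from the thread, just my reading of the Vulkan API) that asks the driver which queue families exist and flags any dedicated transfer family; on GCN cards that family sits on the DMA/copy engines, which is what lets uploads overlap the 3D queue:

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void)
{
    /* bare-bones instance, just enough to query the GPU */
    VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_0 };
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                 .pApplicationInfo = &app };
    VkInstance inst;
    if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) return 1;

    uint32_t ndev = 1;
    VkPhysicalDevice dev = VK_NULL_HANDLE;
    vkEnumeratePhysicalDevices(inst, &ndev, &dev); /* just take the first GPU */
    if (ndev == 0) return 1;

    uint32_t nfam = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(dev, &nfam, NULL);
    VkQueueFamilyProperties fam[16];
    if (nfam > 16) nfam = 16;
    vkGetPhysicalDeviceQueueFamilyProperties(dev, &nfam, fam);

    for (uint32_t i = 0; i < nfam; ++i) {
        VkQueueFlags f = fam[i].queueFlags;
        /* TRANSFER but neither GRAPHICS nor COMPUTE = a dedicated copy
           engine; uploads submitted here can overlap the 3D queue instead
           of preempting it. */
        if ((f & VK_QUEUE_TRANSFER_BIT) &&
            !(f & (VK_QUEUE_GRAPHICS_BIT | VK_QUEUE_COMPUTE_BIT)))
            printf("family %u: dedicated transfer queue (%u queues)\n",
                   i, fam[i].queueCount);
    }

    vkDestroyInstance(inst, NULL);
    return 0;
}
```

Streaming textures then just means recording the copies into a command buffer for that family and submitting it with its own fence, while the graphics queue keeps rendering.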
I find it interesting that the basic premise for devs going forward is now: 'worry about getting NVidia to work acceptably, as AMD cards will sort themselves out'. For example, this DX12 dev guide (yes, I know Doom is Vulkan, but the multi-engine paradigm is the same in both) actually goes as far as to spell it out:
* Choose sufficiently large batches of short running shaders.
Long running shaders can complicate scheduling on Nvidia's hardware. Ensure that the GPU can remain fully utilized until the end of each batch. Tune this for Nvidia's hardware, AMD will adapt just fine.
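As a toy illustration of that guideline (everything here is hypothetical: the batch size, the pipeline layout, and the assumption that the workload splits cleanly), this is roughly what 'large batches of short running shaders' looks like with Vulkan compute, slicing one monster dispatch into pieces the scheduler can interleave:

```c
#include <vulkan/vulkan.h>

/* Hypothetical helper: split one large compute workload into shorter
   dispatches. Assumes a compute pipeline is already bound on `cmd` and
   that `layout` declares a 4-byte push constant for the batch offset. */
void dispatch_in_batches(VkCommandBuffer cmd, VkPipelineLayout layout,
                         uint32_t total_groups)
{
    const uint32_t kBatch = 4096; /* tuning knob; an assumption, not a spec */
    for (uint32_t done = 0; done < total_groups; done += kBatch) {
        uint32_t n = total_groups - done;
        if (n > kBatch)
            n = kBatch;
        /* tell the shader which slice of the workload this batch covers */
        vkCmdPushConstants(cmd, layout, VK_SHADER_STAGE_COMPUTE_BIT,
                           0, sizeof done, &done);
        vkCmdDispatch(cmd, n, 1, 1); /* one short-running batch */
    }
}
```

Each dispatch boundary gives hardware that time-slices its queues a place to switch work in, while AMD's ACEs interleave regardless, which matches the 'AMD will adapt just fine' remark.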
NVidia is getting into an age where they will have minor quirks like that all over the place and will rely on their mindshare for users to 'tolerate' this while they sort it out in drivers. At the same time, developers will have less and less incentive to profile every single NVidia card model to get it working acceptably. They will choose a baseline (and I hope it's the 1060, so that things work for most users) and tune for that minimum (lower models will have degraded quality while higher models may take a bit of an FPS hit).
I really want to see how well the 1060 with its 192bit bus will cope with these scenes in Vulkan mode. It may make the problem much more visible which would confirm my suspicions (i.e. they optimised Doom for a 1080 which makes the 1070 suffer and the 1060 suffer even more).
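Same back-of-envelope arithmetic as earlier, assuming the stock 1060 spec of 8 GT/s GDDR5 on its 192-bit bus:

192 bit × 8 GT/s ÷ 8 = 192 GB/s

That's 25% less than a stock 1070 and 40% less than a 1080, so if the time-slice theory above is right, the copy phases should take proportionally longer and the effect should be easier to spot.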
Pointless? It looks like Nvidia Pascal cards gain frames in Vulkan compared to OpenGL, how is that pointless?
Doom is a bit of an oddity in which API runs best. Depending on the area, I get more frames with OGL, while a different area sees Vulkan getting more frames. This is on a 6850K at 4.4GHz, which might help OGL. Looking at benches on the 480 though, Vulkan is clearly the best API to run.
Yea, maybe in some areas the gains would be less on a 480 too, but overall it is probably always higher than OpenGL. I found at 4K it actually went from quite choppy to very smooth. It's almost like a GPU upgrade.
As an Nvidia owner I will say Vulkan is pointless for Nvidia and great for AMD, and as I won't be buying a new card for a while yet, give me DirectX or OpenGL games for my card.
It may as well be called AMD Vulkan!
In Doom.
For right now.
I just did a quick comparison in Doom and I'm getting quite a nice boost from Vulkan compared to OpenGL. I'm running it with G-Sync and with Vulkan the FPS was pretty much sitting at 144 with the occasional dip into the 130s or 120s, but with OpenGL it was dipping into the 90s.
I should really test it without any G-Sync / V-Sync and get a proper FPS comparison, but it looks like I'm getting around a 30% performance boost in the area I tested. This might be more to do with my 3570K, which isn't overclocked, with Vulkan getting more work out of the limited CPU power. Looking in Afterburner, Vulkan had both the GPU and CPU hovering close to 100%, but with OpenGL both were lower, around 70-80%. That makes me wonder where the bottleneck is with OpenGL.
Edit: also, FXAA doesn't seem to work in OpenGL, but it does in Vulkan. It doesn't seem to affect performance either way, which is why I use it.