
Vulkan and DX12 on new GPUs

I have no idea about that game, I don't play it, but that doesn't make sense. Does the game have Nvidia GameWorks/HairWorks? If it does, disable the Nvidia settings; there's no way you should be getting that.

I'll go have a look at that game.

It's not my bench, it was someone else's, but I expected a lot better and wouldn't buy a game to play at 30fps on a £300 graphics card.
People say it's better now with DX12; I'll have to check that out.
 
In my experience, it was just a horribly un-optimised game; it didn't matter what settings I used, the thing just ran crap.

Haven't played it since the latest patch, but it may be a lot better with DX12 async now, at least for AMD cards anyway :p ;)

The performance was the least of my annoyances, though; the constant crashing was extremely frustrating.

The majority didn't seem to be affected by these issues, but on Steam, Reddit and other forums there were quite a few people being vocal about the same problems, even Nvidia users.

A lesson to myself: from now on, always buy from Steam, and if a game runs ****, use the refund policy. Not only do I get all my money back, but the developers lose money since Steam/Valve keep their cut.


And yes, I posted this a while ago about there being no acknowledgement at all of AMD for PureHair, with Nvidia even taking a wee dig:

Well, I got this game and it is stunning, but I found this rather weird... on the start-up screen you get the Nvidia GameWorks logo, no AMD logo, and this as a "tip" on the main screen:

[screenshot of the in-game tip]

No acknowledgement of AMD at all...

Dick move by the developers and Nvidia, but hey, money conquers all.
 
Buying games six months after release is a good strategy for me. I still haven't even bought GTA 5, but I think the bugs have been ironed out there too.
It's annoying that DX12 patches come out way after you've completed the games. Even with Doom I was like, wtf, a Vulkan patch... I've finished the game now.
 
GTA 5 ran superbly on launch day for me and most people, IIRC. The only issue was that the downloader kept resetting and you had to hit a "retry" button, which wasn't fun for a 50+GB game, especially at 1/2/3 in the morning :p
 

My main issue with that game is the online loading screens. Horrific stuff; and the game loves powerful CPUs.

Now that's a title that would benefit from the likes of Vulkan and DX12. Maybe GTA 6 :D
 

All the GTA games needed powerful CPUs. GTA 4 on my dual-core E8500 at 4.2GHz ran like garbage with a 4870 1GB, which was a really good setup back then.
 
Yup, the loading screens are awful; even on an SSD it's terrible :(

I find GTA 5 runs really well on my i5 750 @ 4GHz. IIRC, the only settings I keep off are the advanced graphics and grass (set to high), plus no AA, and I maintain 60fps 90+% of the time. Pretty damn impressive performance on the whole, considering the open world and how good it looks tbh.

I would be very surprised if GTA 6 doesn't have Vulkan and/or DX12.
 

I actually never played GTA 4, but looking at some of those new mods for it, I might pick it up at some stage. Shame the one thing they couldn't improve was the character models.


I remember when the 780 Ti came out and absolutely mullered the 290X. I couldn't justify the cost of a 780 Ti at the time, so I had the 290X, i.e. the poor man's option. How the tables have turned.

I never knew the performance gap was so huge at the time the card came out :eek:
http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review/17

This will break down to being 11% faster than Radeon R9 290X, 9% faster than GTX Titan, and a full 20% faster than the original GTX 780 that it formally replaces.
 
I'd probably find it a little unsettling playing GTA with an ultra-realistic mod. Impressive, though.

I think the most off-putting part of the mods is the extreme contrast between the player models and the rest of the world. If you can get over that, you're going to have a fantastically beautiful experience.

 

Okay, not huge, but when it released it was done to firmly put the fastest-GPU title back with Nvidia, and it did so convincingly. Yes, it may only have been 4, 5 or 6 frames, but it was consistently faster in 90% of titles, making it the must-have in the eyes of those of us who couldn't afford it. :)
 

Oh certainly; looking at how Kepler has done overall over the past year, it does not look good.

I still remember the outrage when The Witcher 3 and Project Cars came out, and people swore Nvidia had purposely gimped their Titans and 780s because they performed so badly.
 
It's all excuse-making by newish posters who have not seen what has happened over the last four years - I have a GTX960 which is increasingly getting thrashed by an R9 380. Instead of highlighting it so Nvidia can perhaps be forced to try and optimise performance, the excuse-makers are all suddenly quiet.

Even if you have a soft spot for Nvidia, people need to be highlighting it, just like the GTX970 issues were highlighted and people got $30 back from Nvidia in the US.

People said the HD7870 would last longer than a GTX660, and many said they were fibbing as the GTX660 would be fine. The former were right, and I saw it with my mate's card, since I had a GTX660.

People said the HD7950 BE/R9 280 would last longer than a GTX660 Ti, and many said they were fibbing as the GTX660 Ti would be fine. The former were right.

People said the HD7950 BE/R9 280 would last longer than a GTX760, and many said they were fibbing as the GTX760 would be fine. The former were right.

People said the R9 285/R9 380 would last longer than a GTX960, and many said they were fibbing as the GTX960 would be fine. The former were right.

You see a trend, but all of a sudden you see these new posters trying to say it's all fibs and the GTX1060 WON'T do the same thing at all.

Why?? Because Nvidia decided to have a change of heart?? They sell enough cards on brand strength alone, so why should they care??

All these posters can quietly ignore the last few years, since they have plausible deniability about the times people have discussed these things before.

Yup, it is very sad how the Nvidia owners are acting, even at Guru3D.
 

Not to mention an overclocked GTX 580 giving the same performance as an overclocked GTX 950 in games, not just in synthetic benches. Sometimes GTX 960-level performance was even within reach for an overclocked GTX 580. Another thing is that Maxwell isn't going to get specific hardware driver optimisations, only general ones, like Fermi and Kepler have now.

It doesn't look good at all to the Nvidia community. Just a bad message to send.
 
Some time ago I came up with a diagram showing how graphics software technologies have evolved over the last decades – see my blog post "Lower-Level Graphics API - What Does It Mean?". The new graphics APIs (Direct3D 12, Vulkan, Metal) are not only a clean start, abandoning all the legacy garbage going back to the '90s (like glVertex), but they also take graphics programming to a new level. It is a lower level – they are more explicit, closer to the hardware, and a better match for how modern GPUs work. At least that's the idea. It means simpler, more efficient, and less error-prone drivers. But they don't make game or engine programming simpler. Quite the opposite – more responsibilities are now moved to engine developers (e.g. memory management/allocation). Overall, it is commonly considered a good thing though, because the engine has higher-level knowledge of its use cases (e.g. which textures are critically important and which can be unloaded when GPU memory is full), so it can get better performance by managing them properly. All this is hidden inside the engines anyway, so developers making their games don't notice the difference.

Those of you who – just like me – deal with these low-level graphics APIs in their everyday work may wonder whether they provide the right level of abstraction. I know it will sound controversial, but sometimes I get the feeling they sit at exactly the worst possible level – so low that they are difficult to learn and use properly, yet so high that they still hide implementation details important for getting good performance. Let's take image/texture barriers as an example. They were non-existent in previous APIs. Now we have to issue them, which is a major pain point when porting old code to a new API. Issue too few and you get graphical corruption on some GPUs and not on others. Issue too many and your performance can be worse than it was on DX11 or OGL. At the same time, they are an abstract concept that still hides multiple things happening under the hood. You can never be sure which barrier will flush some caches, stall the whole graphics pipeline, or convert your texture between internal compression formats on a specific GPU, unless you use a specialized, vendor-specific profiling tool like Radeon GPU Profiler (RGP).
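
For readers less familiar with what a barrier actually looks like in code, here is a minimal sketch (not from the post; the command buffer and image are assumed to already exist) of transitioning a texture from a copy destination to something a fragment shader can sample:

    // Minimal sketch of the kind of barrier described above: transitioning a
    // texture from "copy destination" to "sampled in a fragment shader".
    // Assumes `cmd` is a VkCommandBuffer in the recording state and `texture`
    // is a single-mip, single-layer VkImage that was just written by a copy.
    #include <vulkan/vulkan.h>

    void TransitionForSampling(VkCommandBuffer cmd, VkImage texture)
    {
        VkImageMemoryBarrier barrier{};
        barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
        barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;        // what we wait for
        barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;           // what we make visible
        barrier.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
        barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
        barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
        barrier.image = texture;
        barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

        // What this actually costs (cache flush, pipeline stall, decompression)
        // is up to the driver and GPU - the API only expresses the dependency.
        vkCmdPipelineBarrier(cmd,
            VK_PIPELINE_STAGE_TRANSFER_BIT,
            VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
            0, 0, nullptr, 0, nullptr, 1, &barrier);
    }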

It's the same with memory. In DX11 you could just specify the intended resource usage (D3D11_USAGE_IMMUTABLE, D3D11_USAGE_DYNAMIC) and the driver chose a preferred place for it. In Vulkan you have to query for the memory heaps available on the current GPU and explicitly choose the one you decide is best for your resource, based on low-level flags like VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT, VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT, etc. AMD exposes 4 memory types and 3 memory heaps. Nvidia has 11 types and 2 heaps. Intel integrated graphics exposes just 1 heap and 2 types, showing the memory really is unified, while an AMD APU, also integrated, has the same memory model as the discrete cards. If you try to match these to what you know about the physically existing video RAM and system RAM, it doesn't make any sense. You could just pick the first DEVICE_LOCAL memory for the fastest GPU access, but even then you cannot be sure your resource will stay in video RAM. It may be silently migrated to system RAM without your knowledge or consent (e.g. if you run out of memory), which will degrade performance. What is more, there is no way to query the amount of free GPU memory in Vulkan, unless you resort to hacks like using DXGI.
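
To make the contrast concrete, a minimal sketch of that memory-type selection might look like the following (illustrative helper, not from the post; it just picks the first DEVICE_LOCAL type that the resource's requirements allow):

    // Enumerate what the implementation exposes and pick the first DEVICE_LOCAL
    // memory type compatible with the resource's VkMemoryRequirements.
    #include <vulkan/vulkan.h>
    #include <stdexcept>

    uint32_t FindDeviceLocalMemoryType(VkPhysicalDevice physicalDevice,
                                       const VkMemoryRequirements& req)
    {
        VkPhysicalDeviceMemoryProperties memProps{};
        vkGetPhysicalDeviceMemoryProperties(physicalDevice, &memProps);

        for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i)
        {
            const bool allowed = (req.memoryTypeBits & (1u << i)) != 0;
            const bool deviceLocal =
                (memProps.memoryTypes[i].propertyFlags &
                 VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT) != 0;
            if (allowed && deviceLocal)
                return i;   // "fastest for GPU access" - but the allocation may
                            // still be silently migrated to system RAM under pressure.
        }
        throw std::runtime_error("No DEVICE_LOCAL memory type available");
    }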

Hardware queues are no better. Vulkan claims to give explicit access to the pieces of GPU hardware, so you need to query for the queues that are available. For example, Intel exposes only a single graphics queue. AMD lets you create up to 3 additional compute-only queues and 2 transfer queues. Nvidia has 8 compute queues and 1 transfer queue. Do they all really map to silicon that can work in parallel? I doubt it. So how many of them should you use to get the best performance? There is no way to tell by just using the Vulkan API. AMD promotes doing compute work in parallel with 3D rendering, while Nvidia diplomatically advises being "conscious" with it.
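
The query itself is straightforward; what it does not tell you is how much genuinely parallel hardware sits behind each family. A minimal sketch (assuming a VkPhysicalDevice is already in hand):

    // List the queue families an implementation exposes. One vendor may report
    // a single graphics family, another several compute-only and transfer-only
    // families - the API alone doesn't say what runs in parallel on silicon.
    #include <vulkan/vulkan.h>
    #include <vector>
    #include <cstdio>

    void PrintQueueFamilies(VkPhysicalDevice physicalDevice)
    {
        uint32_t count = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, nullptr);
        std::vector<VkQueueFamilyProperties> families(count);
        vkGetPhysicalDeviceQueueFamilyProperties(physicalDevice, &count, families.data());

        for (uint32_t i = 0; i < count; ++i)
        {
            const VkQueueFlags f = families[i].queueFlags;
            std::printf("family %u: %u queue(s)%s%s%s\n",
                i, families[i].queueCount,
                (f & VK_QUEUE_GRAPHICS_BIT) ? " graphics" : "",
                (f & VK_QUEUE_COMPUTE_BIT)  ? " compute"  : "",
                (f & VK_QUEUE_TRANSFER_BIT) ? " transfer" : "");
        }
    }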

It's the same with presentation modes. You have to enumerate the VkPresentModeKHR-s available on the machine and choose the right one, along with the number of images in the swapchain. These don't map intuitively to the typical user-facing setting of V-sync = on/off, as they are intended to be low level. Still, you have no control over, and no way to check, whether the driver does a "blit" or a "flip".
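
As a rough sketch of what that selection looks like in practice (assuming a physical device and surface already exist; preferring MAILBOX and falling back to FIFO is just one common heuristic, not a rule from the post):

    // Enumerate the available present modes and choose one. FIFO is the only
    // mode guaranteed to exist; MAILBOX avoids tearing with lower latency.
    // Neither tells you whether the driver will blit or flip.
    #include <vulkan/vulkan.h>
    #include <vector>

    VkPresentModeKHR ChoosePresentMode(VkPhysicalDevice physicalDevice,
                                       VkSurfaceKHR surface)
    {
        uint32_t count = 0;
        vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, &count, nullptr);
        std::vector<VkPresentModeKHR> modes(count);
        vkGetPhysicalDeviceSurfacePresentModesKHR(physicalDevice, surface, &count, modes.data());

        for (VkPresentModeKHR mode : modes)
            if (mode == VK_PRESENT_MODE_MAILBOX_KHR)
                return mode;                     // no tearing, low latency
        return VK_PRESENT_MODE_FIFO_KHR;         // always supported, classic v-sync
    }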

One could say the new APIs don't deliver on their promise of being low level, explicit, and having predictable performance. It is impossible to deliver on, unless the API is specific to one GPU, as it is on consoles. A common API over different GPUs is always high level: things happen under the hood, and there are still fast paths and slow paths. Isn't all this complexity just for nothing? It may be true that, compared to previous-generation APIs, drivers for the new ones need not launch additional threads in the background or perform shader compilation on the first draw call, which greatly reduces the chances of major hitching. (We will see how long this state persists as the APIs and drivers evolve.)* Still, there is no way to predict or ensure a minimum FPS/maximum frame time. We are talking about systems where multiple processes compete for resources. On modern PCs there is not even a way to know how many cycles a single instruction will take! Cache memory, branch prediction, out-of-order execution - all of these mechanisms are there in the CPU to speed up the average case, but there can always be cases where it works slowly (e.g. a cache miss). It's the same with graphics. I think we should abandon the false hope of predictable performance and treat it as a thing of the past, just like pixel-perfect rendering. We can optimize for the average, but we cannot ensure the minimum. After all, games are "soft real-time systems".

Based on that, I have been wondering whether there is room for a new graphics API on top of DX12 or Vulkan. I don't mean a whole game engine with physics simulation, sound, input controllers and all, like Unity or UE4. I mean an API just like DX11 or OGL, on a similar or higher abstraction level (if higher level, maybe the concept of a persistent "frame graph" with explicit pass and resource dependencies is the way to go?). I also don't think it's enough to just reimplement any of those old APIs. The new one should take advantage of the features of the explicit APIs (like parallel command buffer recording), while hiding the difficult parts (e.g. queues, memory types, descriptors, barriers), so it's easier to use and harder to misuse. (An existing library similar to this concept is V-EZ from AMD.) I think it could still have good performance. The key thing needed to create such a library is abandoning the assumption that the developer must define everything up front, with nothing allocated, created, or transferred on first use.
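
Purely as an illustration of the kind of interface being hinted at here (hypothetical, not V-EZ or any existing library), a frame-graph-style API could let passes declare what they read and write and leave barriers, memory placement and submission to the library:

    // Hypothetical sketch of a higher-level, frame-graph-style interface:
    // passes declare which resources they read and write, and the backend
    // derives barriers, memory placement and queue submission from that.
    #include <cstdint>
    #include <functional>
    #include <initializer_list>
    #include <string>

    struct TextureDesc { uint32_t width, height; /* format, usage, etc. */ };

    class FrameGraph
    {
    public:
        using Handle = uint32_t;

        // Resources are declared by intent; the backend picks heaps/memory types.
        virtual Handle CreateTexture(const std::string& name, const TextureDesc& desc) = 0;

        // A pass names its inputs and outputs; barriers are derived, not hand-written.
        virtual void AddPass(const std::string& name,
                             std::initializer_list<Handle> reads,
                             std::initializer_list<Handle> writes,
                             std::function<void(/* command context */)> record) = 0;

        // Compile the dependency graph and submit it for this frame.
        virtual void Execute() = 0;

        virtual ~FrameGraph() = default;
    };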

See also next post: "How to design API of a library for Vulkan?"

Update 2019-02-12: I want to thank all of you for the amazing feedback I received after publishing this post, especially on Twitter. Many projects have been mentioned that try to provide an API better than Vulkan or DX12 - e.g. Apple Metal, WebGPU, The Forge by Confetti.

* Update 2019-04-16: Microsoft just announced they are adding background shader optimizations to D3D12, so the driver can recompile and optimize shaders in the background on its own threads. Congratulations! We are back at D3D11 :P

http://asawicki.info/news_1701_thoughts_on_graphics_apis_and_libraries.html
 
It's what I've been saying all along :| What most developers actually want is something a bit tidier than what DX/OGL had become, with the ability to selectively expose just a bit more of what is going on beneath the hood when and if they need it. DX12 and Mantle/Vulkan are/were a complete misunderstanding of what most developers actually want to be working with.
 