NVIDIA Publishes DirectX 12 Tips for Developers

Yeah, if you like... the PS3 also sold 10 million a year... PC discrete GPUs still sold 50-60 million a year in the same period. Completely irrelevant to a discussion regarding DX12 optimisation though, as those consoles don't support it and a big chunk of the PC sales from the same period won't either.
Not irrelevant at all. Those consoles won't get DX12 optimisation since they don't support it, but those sales still contribute to developers deciding which system to start programming on and, from there, how they'll approach optimisation. So if there are enough console sales they might start there and optimise the DX12 performance afterwards; DX12 isn't the ONLY consideration devs have when it comes to platform choices.

And let's be honest, a lot of computers come with GPUs that aren't necessarily used for gaming, or are just thrown in but too weak to really make use of it. I can't imagine all the 950s are really going to be pulling much DX12 grunt, for example. We'll see how it goes, but in my opinion a lot of devs start with consoles first and, as I pointed out, some of those sales you were talking about are AMD cards using GCN anyway.
 
You're assuming that devs do all their optimisation on one platform and then only make minor tweaks on other systems. MGS5 is a great example of "lowest common denominator" gaming. It doesn't really need optimising on NVIDIA hardware as it's not very demanding anyway. Either that or it actually is just very well optimised on NVIDIA hardware.

Either way it's not a good example in support of your argument.
 
Actually quite the opposite, so you might have misunderstood: I was pointing out that they'd go for what is seen as the easier way to support the majority of systems (which is why I've been noting for the past few posts that the consoles share GCN architecture with AMD GPUs), so the assumption is that they look for the most compatible architecture rather than just one system.

Devs will always do whatever is easiest, to be honest. It doesn't make sense to start with Nvidia and then have to work backwards for four or five other systems when they can build a more solid foundation on those four or five systems and then simply tweak a little on the Nvidia side. I agree they'll try to work on as many systems as possible, but regardless of how well it runs on Nvidia, that doesn't make it a bad example. If you're saying it's a great example of lowest common denominator gaming, then that lowest common denominator is clearly the consoles, which suggests they had the consoles in mind from the get-go, so your comments make no sense. It's a bad example because they clearly focused on the systems I said they would probably want to focus on? I never said all devs would have poor Nvidia optimisation (that can be done before release anyway), so that doesn't invalidate anything either. Plenty of games get ported to PC afterwards and aren't trash on Nvidia hardware.
 
I didn't reply to you, I replied to bennyboy, who was saying that PC gamers were a minority and that devs will optimise for GCN over Nvidia.
I'm not quite sure what rabbit hole you've delved off into, but I was replying to the inference that NVIDIA optimisation will suffer as a result of the consoles being GCN, in a thread about DX12 optimisation.

If devs are just going to target the lowest common denominator then it makes all talk of DX12 optimisation irrelevant, as even DX11 can best what the consoles can do with one arm behind its back. If a game needs DX12 on PC then it's got no hope of running on a console in the first place. The whole point of DX12 on PC is to finally allow PCs to realise their full potential over consoles instead of just being a slightly more graphically polished version of the console game. MGS5 isn't a DX12 game.

Even within GCN, a Fury X is so far removed from the hardware in an Xbox that optimisation for each should be drastically different.
 
Hahaha, GTX 970 :P

Funny, as this is a lot of the optimisation that would have been done on Fiji microcode, lol.

It's not good practice to saturate the frame buffer anyway; that's how you run into swap problems. Efficient buffer management is something developers have wanted more control over, so it makes perfect sense for this to be reiterated.
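
To make that concrete, here is a rough sketch of the kind of budget check that advice implies; the helper name, the use of DXGI's memory query and the 64 MB margin are my own illustrative assumptions, not anything taken from NVIDIA's guide. The idea is simply to ask the OS how much local video memory the app is currently allowed to use, and back off before an allocation would push past it:

#include <dxgi1_4.h>

// Rough sketch: check the OS-reported local VRAM budget before committing a
// large allocation, so the app backs off (or evicts something) instead of
// oversubscribing video memory and forcing resources to be paged in and out.
bool HasHeadroomForAllocation(IDXGIAdapter3* adapter, UINT64 requestedBytes)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter->QueryVideoMemoryInfo(
            0,                                // GPU node 0
            DXGI_MEMORY_SEGMENT_GROUP_LOCAL,  // dedicated video memory
            &info)))
    {
        return false;  // be conservative if the query fails
    }

    // Keep some slack under the reported budget; 64 MB is an arbitrary
    // example value, not a figure from the guide.
    const UINT64 kSlack = 64ull * 1024 * 1024;
    return info.CurrentUsage + requestedBytes + kSlack <= info.Budget;
}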
 
For those going on about AMD not writing their own guide because they're too lazy: what do you think all the developers who signed up to Mantle were getting? There might not have been tons of games released, but there were hundreds of developers signed up. I'm sure getting some hands-on experience coding for GCN at a low level on Mantle was much more intuitive than reading Nvidia's guide. Money-wise and effort-wise I'm pretty sure Mantle went far and beyond.

People forget so easily these days if it doesn't suit their point of view.
 
You know what would be really useful? Like, some kind of manual with maybe some example code, in a kind of kit. We could call it a software development kit, and we could release it to the public; we could call it a public SDK...

:D
 
But this begs the question: if AMD's cards are so good and priced so competitively, why are Nvidia outselling AMD 8:2?

Who said these things? AMD's Fiji cards are overpriced and not performing as hoped. The 390 is a good card but it's currently being hampered by some sort of driver bug. It sounds like the 380X will be a good one, as the cut-down 4GB version in the 380 is a good chip, but it's not here yet. So as it stands, no, their cards are not "so good". But like we saw with Hawaii, the hope is that it's a slow burner and they'll gradually improve and fix the issues, providing a solid range of cards that continue to get better performance as they mature.

It's like when I got my 290X. At that time it was considerably slower than the 780 Ti, and I wished I could afford the Ti but couldn't, so I went for the slower 290X. Today that gap has diminished to virtually nothing, making them comparable options performance-wise, with the 290X generally the preferred option thanks to the extra RAM and better DX12 compatibility.
 
Hopefully the same thing will happen with Fiji; AMD do support their older cards quite well :cool:
 
I'm guessing there's some flaw with their cards, so they're subtly trying to steer developers away from hitting it when coding for DX12 ;)

I'm very tempted to go red for my next card. The whole thing with Nvidia downgrading Kepler performance earlier in the year, then claiming it was a "bug" when everyone went crazy, has put me right off them.
 
I didn't reply to you, I replied to bennyboy, who was saying that PC gamers were a minority and that devs will optimise for GCN over Nvidia.
I'm not quite sure what rabbit hole you've delved off into, but I was replying to the inference that NVIDIA optimisation will suffer as a result of the consoles being GCN, in a thread about DX12 optimisation.

If devs are just going to target the lowest common denominator then it makes all talk of DX12 optimisation irrelevant, as even DX11 can best what the consoles can do with one arm behind its back. If a game needs DX12 on PC then it's got no hope of running on a console in the first place. The whole point of DX12 on PC is to finally allow PCs to realise their full potential over consoles instead of just being a slightly more graphically polished version of the console game. MGS5 isn't a DX12 game.

Even within GCN, a Fury X is so far removed from the hardware in an Xbox that optimisation for each should be drastically different.
Ah, sorry then; without a direct quote I mistook who you were replying to.
 
Well you know, Nvidia do an API guide and they're evil and trying to screw AMD and everyone else, but AMD do one and no one gives a "poo". No one should give a poo about either. I'm sure back in the 90s, when there were many more APIs, they had their own guides too (it was a bit more of a cluster-"intercourse" back then with the number of them).
We already have GameWorks using far more DX11 tessellation than the feature was ever intended to be used with, so you can't really blame people for worrying about foul play when Nvidia is "teaching" developers how to implement DX12 in a specific way, rather than developers implementing EVERYTHING that DX12 can offer.

I mean, can you imagine how silly it would be if, say, a PSU manufacturer claimed their PSU was rated at 80 Plus, and when they tried to get it certified they asked the certifying body to run its tests in a specific manner, providing a guide with instructions on what to do?

Yes, it made sense for AMD to do something like that for Mantle, as that is their API, but DX12 is not Nvidia's API. I think developers should just use DX12 the way they want to use it, and if graphics cards cannot deliver, then the GPU manufacturers should play catch-up rather than asking developers to slow down and wait for them.

The way I see it, Nvidia promised that their cards can support DX12, but they now realise they're falling short on the scale of what can be supported, so they're trying to get developers to use DX12 in a specific way to hide that and avoid negative publicity.
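
For what it's worth, the capability differences being argued about here are exactly what DX12 lets a renderer query at runtime, so an engine can pick a code path per GPU rather than assume every "DX12" card supports the same feature set. Below is a minimal sketch, assuming the default adapter and feature level 11_0; the tiers printed are just illustrative choices on my part, not anything from NVIDIA's guide:

#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

// Minimal sketch: create a device on the default adapter and print what the
// hardware actually supports, so a renderer can branch on capability tiers
// rather than on the vendor ID.
int main()
{
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::printf("No DX12-capable device found.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))))
    {
        // Tiers differ between vendors and GPU generations.
        std::printf("Resource binding tier:    %d\n", options.ResourceBindingTier);
        std::printf("Tiled resources tier:     %d\n", options.TiledResourcesTier);
        std::printf("Conservative raster tier: %d\n",
                    options.ConservativeRasterizationTier);
    }
    return 0;
}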
 
Both GPU vendors create guides and code examples on how to code for their hardware; they always have. How is that new?
Surely you want devs to use both.
 