
BATMANGATE - Nvidia bows to public pressure over AA

The NVIDIA AA code should not be considered an NVIDIA-specific optimisation, especially because it's been proven to work as-is on ATI graphics cards?

That doesn't make for a sound logical conclusion.

It's been mentioned that someone at NVIDIA suggested ATI should duplicate the AA code but surely that isn't necessary now that we know the NVIDIA AA code is not a vendor specific optimisation lol? :)

Where is the evidence that the AA code isn't vendor-specifically optimised? The above conclusion is erroneous.
 
Folks, I think now we have a resolution to the user's problem, is it about time we let this discussion come to a close?

Seems to be a lot of discussion that's just going around in circles.
 
The fact of the matter is, as far as the actual code goes, the nVidia-specific parts make up probably 10-15% - and that is the part that is actually locked out. Almost all the rest of the code is the standard prescribed method for implementing MSAA with a deferred shader pipeline, and is in fact running on ATI (and any other) hardware anyway - you just don't see the end result unless using nVidia hardware. We can only speculate on why. On the one hand it potentially does result in an unnecessary performance hit - malicious intentions? On the other hand it is exactly how a programmer would set up code if they expected to co-exist with the impending implementation of another vendor's version of the code...
 
Folks, I think now we have a resolution to the user's problem, is it about time we let this discussion come to a close?

Seems to be a lot of discussion that's just going around in circles.

Not to be an ass but please see post #1 http://forums.overclockers.co.uk/showpost.php?p=15289431&postcount=1 :P


I can't see this thread going anywhere useful - but I'm quite content to carry on defending my posts from the attacks of people who haven't comprehended a thing I've said and just want to "crucify a fanboy".
 
That isn't evidence... it could just as easily be coincidence.

The premise of your argument is flawed so your conclusion isn't watertight.
 
The fact of the matter is, as far as the actual code goes, the nVidia-specific parts make up probably 10-15% - and that is the part that is actually locked out. Almost all the rest of the code is the standard prescribed method for implementing MSAA with a deferred shader pipeline, and is in fact running on ATI (and any other) hardware anyway - you just don't see the end result unless using nVidia hardware. We can only speculate on why. On the one hand it potentially does result in an unnecessary performance hit - malicious intentions? On the other hand it is exactly how a programmer would set up code if they expected to co-exist with the impending implementation of another vendor's version of the code...
HUH!? :D

Someone at ATI requested that the vendor lock be removed so surely the bods at ATI know it is safe for the NVIDIA AA code to run on their graphics cards lol? :confused:
 
Again you're jumping to a conclusion... maybe they just don't care? "it seems to work - if any problems come to light later we can just blame nvidia - everyone hates nvidia anyhow"
 
I can't see this thread going anywhere useful - but I'm quite content to carry on defending my posts from the attacks of people who haven't comprehended a thing I've said and just want to "crucify a fanboy".

Personally I think that all of the comments here are subjective, of questionable accuracy and just regurgitating what the companies have stated. It's turned into a game of "who can quote with a direct reference" rather than delivering the actual truth.

Nobody is providing real evidence.
 
Most people are just regurgitating the ATI statements at face value with no understanding of the detail.


I'm not that great at explaining things myself unfortunately - however everything I have said is based on public statements, facts and readily available information. I have put forward nothing as fact that is based on my own opinion, speculation or conjecture - in the cases where I have speculated on points I've been very clear that I am doing so and not committed to a particular interpretation.
 
And? Same difference. You have two companies that make products, but you'd expect one to help with coding and the other not to? If nVidia and ATI shouldn't be helping devs with coding for the cards, then Microsoft shouldn't be helping devs with coding for Windows. I mean come on, that's what you and tac4u are saying, right?

That's not what you said though.

You said MS shouldn't be helping people with Windows and Office software, even though MS develops both.

What would have made more sense would be to say devs are looking to ATi/nVidia to fix issues with Windows.

In which case it wouldn't be right either.

Devs shouldn't need help from graphics card vendors to enable commonly used and expected features.

The same way you wouldn't and shouldn't expect nVidia, for example, to help MS with multi-threading in Windows.
 
Devs shouldn't need help from graphics card vendors to enable commonly used and expected features.

While I agree with that statement on the whole, due to technical shortcomings when it comes to deferred shading you can't implement the commonly used and expected feature (MSAA) in the normal standard method.

DirectX 9 does not have a standard path for resolving multisampling when deferred shading is used and you have to go outside the normal standard routines to implement this feature at a game level.

While most of this code is fairly generic and can be used on different vendors' hardware with minor tweaks if any - for optimal performance some parts of it do require more hardware specific optimization.

While the nvidia optimizations might work fine on ATI hardware - and even provide a performance increase over driver level brute forcing - with specific optimization for ATI hardware it might work even faster again, or produce even better image quality... the reverse could be true as well - ATI specific optimizations might seem to work fine on nVidia hardware but not produce as good image quality or performance as would be possible with a native version.
 
Again you're jumping to a conclusion... maybe they just don't care? "it seems to work - if any problems come to light later we can just blame nvidia - everyone hates nvidia anyhow"
HUH?! :eek:

NVIDIA has done the right thing in bowing to public pressure to renounce anti-competitive sponsorship practices and given Eidos a clear mandate to remove the vendor ID detect code that is unfairly preventing many of Eidos’ customers from using in-game AA, as per Mr. Weinand’s comments. I would encourage Mr. Singleton at Eidos to move quickly and decisively to remove NVIDIA’s vendor ID detection.
The above was written by ATI's Richard Huddy. He is the one asking for the vendor detection code to be removed lol!

It looks like this thread started from the premise that NVIDIA will do what Richard Huddy proposed when he said the above. I am of the opinion that it should happen because it makes sense lol, but will NVIDIA actually do as he has requested?
 
My point was maybe they don't care if it actually works properly or not...
It's like you are suggesting Richard Huddy might be irresponsibly requesting the vendor detection code be removed. :confused:

Do you know more than you are saying? :D
 
Nope, I know nothing more on this specific incident than what's been published online.

I do have experience of both nVidia and ATI developer support... which is why ATI leave a sour taste in my mouth - despite the fact they finally have a decent driver platform and solid hardware... and why I probably have a somewhat different perspective to the general public on this matter...
 
While I agree with that statement on the whole, due to technical shortcomings when it comes to deferred shading you can't implement the commonly used and expected feature (MSAA) in the normal standard method.

DirectX 9 does not have a standard path for resolving multisampling when deferred shading is used and you have to go outside the normal standard routines to implement this feature at a game level.

While most of this code is fairly generic and can be used on different vendors' hardware with minor tweaks if any - for optimal performance some parts of it do require more hardware specific optimization.

While the nvidia optimizations might work fine on ATI hardware - and even provide a performance increase over driver level brute forcing - with specific optimization for ATI hardware it might work even faster again, or produce even better image quality... the reverse could be true as well - ATI specific optimizations might seem to work fine on nVidia hardware but not produce as good image quality or performance as would be possible with a native version.

I find this statement depressing. In several of your posts regarding this now you have heavily suggested that coding AA when using deferred shading seems to be beyond the ability of most programmers within the industry - either through lack of skill or time/resources, you don't seem to have stated which you think it is.

But it feels like you are saying that coding anything outside of "standard" is simply not considered in today's industry.

That's sad :(
 
Sadly there's quite a lot of truth to your comment...

Programming outside the box is discouraged these days - especially for developers working for publishers like EA, Eidos, Activision, etc. It's all about playing it commercially safe, console compatibility and very little risk or innovation.

At the most basic level, the reasoning for the MSAA/deferred shading thing is that it may or may not be beyond their capabilities - but they just aren't paid to do it at your average studio - whereas the top-level developers behind the engines commonly used in video games get to call the shots a lot more.

Although, as I said, for a properly optimised lower-level implementation you need years of experience with the hardware/driver development of the video cards.
 
Custom multisampling itself, even outside the DX spec, isn't _that_ hard to implement - for ETQW they had a vendor-agnostic version up and running inside a day... when you throw deferred lighting into the mix it does need a lot more care and attention however - take a look at STALKER: Clear Sky for how badly it can go wrong heh... dropping fps from 70 to 11 when forced incorrectly.
 