
AMD ramps up its Gaming Evolved program

AMD finally pulling it together, it seems. What a massive improvement when you think that only a year ago we had consistently broken drivers for release-day games.

Keep it up AMD, giving Nvidia very healthy competition :)

Agreed.

The whole "this game is gimped on the other side" thing which goes on, frankly stinks.

What is good is that competition is very high at the moment, which should drive standards up and benefit us end users.

To be fair, PhysX isn't the same - AMD GPUs can't run it, whereas DirectCompute enhancements could quite easily run on Nvidia cards if they had the capability, like the 5** series did.
 
But it's only the most efficient way for half of the market. It's similar to PhysX on AMD cards: it can be done on the CPU, but it gimps performance.

I don't agree with using compute because it gimps performance on NV hardware. And to balance that, on consideration I don't agree with PhysX on ATI either, because it's gimped there too.

It's not really the same situation. DX11 compute is part of the DX11 spec; it's up to nVidia to make sure their GPUs are specced for all areas of it. AMD don't have a say in, or "own", anything DX11 compute related; it's set by Microsoft.

PhysX is owned by nVidia; they control its use, and when they "sponsor" games to include it, they make sure the bog-standard rudimentary physics effects that every other game has are turned off when you turn PhysX off, to exaggerate the effect it has.

nVidia decided to gimp their own compute performance. The 5 series had pretty good compute performance; the 6 series, not so much.
 
But it's only the most efficient way for half of the market. It's similar to PhysX on AMD cards: it can be done on the CPU, but it gimps performance.

I don't agree with using compute because it gimps performance on NV hardware. And to balance that, on consideration I don't agree with PhysX on ATI either, because it's gimped there too.

It gimps performance on NV at the moment because NV decided to gimp it this round.
And no, it's not the same as PhysX. For it to be the same, compute would have to be exclusive to AMD and the compute functions would have to fall back to the CPU just like PhysX does, and then the gimp would be massive.

Also, in regards to PhysX, NV will always have the advantage in those titles, no matter what generation of AMD card.
 
I bet if NV cards were good at compute this gen then AMD wouldn't be pushing this area nearly as hard...

Showdown was started before this gen's cards were even out, and the rumoured NV card was meant to be a compute beast.

Dave Baumann AMD Rep

The direction of Showdown started long before anyone knew Kepler would be relatively weak on the compute side of things. There is nothing that was "designed" to hobble Kepler with Showdown, at least not on our part, not least because we simply didn't know what Kepler was or where it would be weak when the Showdown work was being done.

DirectCompute is already being used by lots of titles, and devs will adopt it more and more as new algorithms are developed using it. Likewise, when we initially demoed Forward+ with Leo it garnered a lot of developer interest and experimentation, because it is a rendering technique that has the efficiencies of deferred shading without some of the limitations.

If there is anything proprietary then it is proprietary to Codemasters, not to AMD. The Forward+ rendering mechanism is based on industry-standard API code and any DX11 compliant GPU can run it, and the source code of the Leo demo featuring it is available to anyone; as per the previous comment, numerous devs have been playing around with it and variations thereof.
http://www.hardwarecanucks.com/foru...0-ti-review-comment-thread-15.html#post655743
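
Out of interest, the core of Forward+ is simple enough to sketch. Below is a rough illustration written as a CUDA kernel purely because it's compact - the real implementation is DX11 HLSL, and every name here (TILE, MAX_LIGHTS, tileLightList, forwardPlusShade) is invented for the example, not taken from AMD's Leo demo. A compute pass bins lights into screen tiles first; the shading pass then loops over only the lights recorded for its pixel's tile.

```
#include <cuda_runtime.h>

#define TILE 16          // 16x16-pixel screen tiles
#define MAX_LIGHTS 256   // cap on lights stored per tile

// Shading pass: every pixel reads the pre-built light list for its own
// tile and accumulates lighting from only those lights.
__global__ void forwardPlusShade(const int* tileLightCount,  // lights per tile
                                 const int* tileLightList,   // indices, MAX_LIGHTS per tile
                                 const float3* lightColors,  // per-light colour
                                 float3* frameBuffer,
                                 int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Which screen tile does this pixel fall in?
    int tilesX = (width + TILE - 1) / TILE;
    int tile = (y / TILE) * tilesX + (x / TILE);

    // Loop over the culled per-tile list instead of every light in the scene.
    float3 c = make_float3(0.0f, 0.0f, 0.0f);
    int n = tileLightCount[tile];
    for (int i = 0; i < n; ++i) {
        int li = tileLightList[tile * MAX_LIGHTS + i];
        // Stand-in for a real BRDF evaluation against light li.
        c.x += lightColors[li].x;
        c.y += lightColors[li].y;
        c.z += lightColors[li].z;
    }
    frameBuffer[y * width + x] = c;
}
```

The efficiency Dave is talking about comes from that inner loop: each pixel evaluates a handful of culled lights rather than every light in the scene, while still shading in a forward pass.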
 
I bet if NV cards were good at compute this gen then AMD wouldn't be pushing this area nearly as hard...

But nVidia CHOSE to gimp their own compute performance.

It's been part of the DX11 spec for YEARS. The 4XX and 5XX series had much better compute performance than Kepler; they were even bigging up compute for the 4 and 5 series.

Do you realise that CUDA is essentially GPU compute? They have also gimped CUDA performance on the 6 series; it's all of their own doing.
 
AMD have only been putting DirectCompute heavily into their titles since they saw that performance is poor on the Nvidia 6XX. It's pretty obvious why. Let's not pretend there is any other reason.

I'm no Nvidia fan and I think they have questionable morals/direction at the moment, but let's not pretend AMD are saints when it comes to leveraging an advantage.

AMD - Gimping Evolved
Nvidia - The Way It's Meant to be Gimped
 
AMD have only been putting DirectCompute heavily into their titles since they saw that performance is poor on the Nvidia 6XX. It's pretty obvious why. Let's not pretend there is any other reason.

I'm no Nvidia fan and I think they have questionable morals/direction at the moment, but let's not pretend AMD are saints when it comes to leveraging an advantage.

AMD - Gimping Evolved
Nvidia - The Way It's Meant to be Gimped


But it's one where NV can claw back, unlike PhysX.
There is not a single AMD Evolved title where NV could not claw the performance back, and there is nothing in AMD Evolved that is over the top like mega-tessellated flat surfaces.
AMD had been weaker at tessellation for some time before the 7xxx, and they still put it in their Evolved titles.
 
AMD have only been putting DirectCompute heavily into their titles since they saw that performance is poor on the Nvidia 6XX. It's pretty obvious why. Let's not pretend there is any other reason.
The thing is, AMD don't have the strong-arming attitude that nVidia has when dealing with third parties. Everyone's known about compute for ages. PhysX isn't something AMD can do anything about; nVidia's compute performance is all down to them.

I'm no Nvidia fan and I think they have questionable morals/direction at the moment, but let's not pretend AMD are saints when it comes to leveraging an advantage.

Not at the moment, that's nVidia in general.

AMD - Gimping Evolved
Nvidia - The Way It's Meant to be Gimped

Well, as above, it's nothing to do with whether AMD or nVidia are doing it; I'd say the same if it was the other way around.

To me, GPU compute performance is something that's been a very long time coming. nVidia were pushing it HEAVILY via CUDA with the 4/5 series; now they're being quiet about it because their cards can't handle it, but that's of their own doing.

It'll lose them customers because of what they've done. I'm talking more about high-performance computing, where there were a fair number of 3D rendering applications that used CUDA to run rendering on GPUs in real time (for previews) instead of on the CPU, moving rendering performance forward MASSIVELY even with consumer hardware.

Now that they've broken that with the 6 series, they're not really going to get many customers wanting to use 6 series GPUs for that, and they can hardly push CUDA the same way when you consider that.
 
The thing is, Nvidia went the direction they did for a reason. If the 6xx series had had good compute performance, the cards would have run much hotter and probably wouldn't have hit 1100-1200MHz at stock, so gaming performance would have been poorer.

When Nvidia had the compute advantage, they were pimping compute. Now AMD have the compute advantage, they are pimping compute.

No one can tell me that the compute-based AA in Sleeping Dogs is worth the 10-degree temperature increase. It's just been put in there so that any reviewer who benches Sleeping Dogs on max settings shows a clear performance advantage for AMD. I don't blame them; you have to fight fire with fire.
 
The thing is, Nvidia went the direction they did for a reason. If the 6xx series had had good compute performance, the cards would have run much hotter and probably wouldn't have hit 1100-1200MHz at stock, so gaming performance would have been poorer.

When Nvidia had the compute advantage, they were pimping compute. Now AMD have the compute advantage, they are pimping compute.

No one can tell me that the compute-based AA in Sleeping Dogs is worth the 10-degree temperature increase. It's just been put in there so that any reviewer who benches Sleeping Dogs on max settings shows a clear performance advantage for AMD. I don't blame them; you have to fight fire with fire.

I care more about the performance than the heat increase; I never turn settings down because of heat.

And if you look at my edited post, things have been in motion for quite some time.

Dave Baumann AMD Rep

The direction of Showdown started long before anyone knew Kepler would be relatively weak on the compute side of things. There is nothing that was "designed" to hobble Kepler with Showdown, at least not on our part, not least because we simply didn't know what Kepler was or where it would be weak when the Showdown work was being done.

DirectCompute is already being used by lots of titles, and devs will adopt it more and more as new algorithms are developed using it. Likewise, when we initially demoed Forward+ with Leo it garnered a lot of developer interest and experimentation, because it is a rendering technique that has the efficiencies of deferred shading without some of the limitations.

If there is anything proprietary then it is proprietary to Codemasters, not to AMD. The Forward+ rendering mechanism is based on industry-standard API code and any DX11 compliant GPU can run it, and the source code of the Leo demo featuring it is available to anyone; as per the previous comment, numerous devs have been playing around with it and variations thereof.
http://www.hardwarecanucks.com/foru...0-ti-review-comment-thread-15.html#post655743
 
The thing is, Nvidia went the direction they did for a reason. If the 6xx series had had good compute performance, the cards would have run much hotter and probably wouldn't have hit 1100-1200MHz at stock, so gaming performance would have been poorer.

When Nvidia had the compute advantage, they were pimping compute. Now AMD have the compute advantage, they are pimping compute.

No one can tell me that the compute-based AA in Sleeping Dogs is worth the 10-degree temperature increase. It's just been put in there so that any reviewer who benches Sleeping Dogs on max settings shows a clear performance advantage for AMD. I don't blame them; you have to fight fire with fire.

Certainly not, I'd much sooner use something like FXAA anyways.
 
Certainly not, I'd much sooner use something like FXAA anyways.

I have started playing BF3 again. Before, I used to run 4xMSAA + post AA on low; now I just use 4xMSAA, as the blur got on my nerves. As we all know, the AA in BF3 misses a lot of stuff, but I would rather have that than the blur.
 
I have started playing BF3 again. Before, I used to run 4xMSAA + post AA on low; now I just use 4xMSAA, as the blur got on my nerves. As we all know, the AA in BF3 misses a lot of stuff, but I would rather have that than the blur.

I totally agree. I much prefer 4xMSAA myself, but that's partially due to my TV size and large pixels. I would rather put on some clear-lens glasses and spread butter over the lenses - it gives the same effect as BF3's FXAA with no performance hit.
 
Medal of Honor Warfighter™

Tile-based Deferred Shading

Medal of Honor Warfighter™ uses Frostbite 2's tile-based deferred shading. This technique breaks the screen up into tiles and uses a DX11 compute shader to determine which lights are used in each tile. By using a compute shader to cull the lights that are not used in a tile, lighting calculations can be done much faster, and more lights can be used overall in the scene.

Depth Bounds Test Extension

The depth bounds extension is used in Medal of Honor Warfighter™ when rendering the deferred lighting. It uses the depth bounds test, which is part of the GCN architecture. This allows the game to specify a range of acceptable depth values; anything outside this range is instantly rejected, saving the cost of a full depth compare.

http://blogs.amd.com/play/2012/10/24/moh-warfighter-amd-guide/2/
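
To make the two techniques quoted above concrete, here are two rough sketches, written as CUDA kernels for compactness rather than the DX11 compute shaders the game actually uses, with all names (Light, TILE, MAX_LIGHTS, the kernel names) invented for the example. First, the per-tile light culling that tile-based deferred shading relies on: one block per screen tile, threads testing the scene's lights in parallel and appending survivors to a shared list.

```
#include <cuda_runtime.h>

#define TILE 16          // 16x16-pixel screen tiles, one block per tile
#define MAX_LIGHTS 256   // cap on lights stored per tile

struct Light { float x, y, radius; };   // screen-space position and radius

// Launch with one block per tile (gridDim = tilesX x tilesY) and e.g.
// 256 threads per block; threads stride over the scene's light list.
__global__ void cullLightsPerTile(const Light* lights, int numLights,
                                  int* tileLightCount, int* tileLightList)
{
    __shared__ int count;             // lights that survive the cull
    __shared__ int list[MAX_LIGHTS];  // their indices

    if (threadIdx.x == 0) count = 0;
    __syncthreads();

    // Screen-space rectangle covered by this block's tile.
    float minX = blockIdx.x * TILE, maxX = minX + TILE;
    float minY = blockIdx.y * TILE, maxY = minY + TILE;

    // Keep only lights whose bounding circle overlaps the tile.
    for (int i = threadIdx.x; i < numLights; i += blockDim.x) {
        Light L = lights[i];
        float cx = fminf(fmaxf(L.x, minX), maxX);  // closest point on tile
        float cy = fminf(fmaxf(L.y, minY), maxY);
        float dx = L.x - cx, dy = L.y - cy;
        if (dx * dx + dy * dy <= L.radius * L.radius) {
            int slot = atomicAdd(&count, 1);       // append to shared list
            if (slot < MAX_LIGHTS) list[slot] = i;
        }
    }
    __syncthreads();

    // Write the compacted list back for the shading pass to read.
    int tile = blockIdx.y * gridDim.x + blockIdx.x;
    int n = min(count, MAX_LIGHTS);
    if (threadIdx.x == 0) tileLightCount[tile] = n;
    for (int i = threadIdx.x; i < n; i += blockDim.x)
        tileLightList[tile * MAX_LIGHTS + i] = list[i];
}
```

And the depth bounds test, expressed as a manual early-out so you can see what the hardware is skipping. On GCN the rejection happens in fixed function before any shading runs, which is why it's effectively free; zMin/zMax here stand for the depth range the current light volume can possibly touch.

```
// Sketch of the depth bounds idea as a software early-out (illustrative
// names; the real test is a fixed-function GPU feature, not shader code).
__global__ void shadeDeferredLight(const float* depthBuffer,
                                   float zMin, float zMax,
                                   float* lightAccum, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float z = depthBuffer[y * width + x];

    // Depth bounds test: pixels the light cannot possibly reach are
    // rejected before any expensive lighting maths is done.
    if (z < zMin || z > zMax) return;

    lightAccum[y * width + x] += 1.0f;  // placeholder for real lighting
}
```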
 
I have started playing BF3 again. Before, I used to run 4xMSAA + post AA on low; now I just use 4xMSAA, as the blur got on my nerves. As we all know, the AA in BF3 misses a lot of stuff, but I would rather have that than the blur.

Real men disable blurfest FXAA and use AMD's morphological AA; not only is there almost no performance hit, it looks better. :cool:
 