
Shader clocks - importance?

I've been looking a little into GPUs recently (still confused as hell) and noticed my 6850 doesn't have a shader clock, whereas its rival the GTX 460 does, yet the 6850 is held in higher regard by reviewers? :confused:

I know the 6850 can clock higher and has higher stock memory and core clocks, but surely having an entire third clock (whatever it does) would push the 460 ahead? Especially when that clock starts at 1400MHz.

Side question: what does a shader clock actually do? I'm guessing shadows (obvious :p) but is it necessary? I've noticed in quite a few games that my 6850 seems to give a striped/grid-like effect to shadows in some places. Is that due to the lack of a shader clock?

Cheers!
 
Someone with more knowledge will jump in here and say I'm totally wrong, but my understanding is that the shading processors are, in effect, the units that actually draw the shaded, textured, bump-mapped/normal-mapped pixels. So the faster this can be done, the more triangles per frame can get proper per-pixel shading rather than flat or Gouraud shading. DirectX 8 introduced programmable shaders, which let game developers do much more with the look of surfaces to suit what they wanted. Rather than just bump maps, texture maps, specular maps etc., they could keep adding graphical fidelity to surfaces in game.
tl;dr: it's the part of your graphics card that makes things look sweet, so the faster it can do that, the better.
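To make "draw the shaded pixels" concrete, here's a rough sketch (in Python, just for illustration, nothing to do with any real GPU driver) of the kind of maths a pixel shader runs for every single pixel: simple Lambertian diffuse shading from a surface normal and a light direction. A GPU's shader units run this sort of calculation in parallel, millions of times per frame, which is why their clock speed and count matter.

```python
# Hypothetical illustration: the per-pixel maths a simple diffuse
# (Lambertian) pixel shader performs. On a real GPU, shader units
# execute this in parallel, once per pixel, every frame.

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def lambert_shade(normal, light_dir, base_colour):
    """Diffuse intensity = max(0, N . L), applied to the base colour."""
    n = normalize(normal)
    l = normalize(light_dir)
    intensity = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * intensity for c in base_colour)

# A pixel whose surface faces the light head-on keeps full brightness...
print(lambert_shade((0, 0, 1), (0, 0, 1), (1.0, 0.5, 0.2)))
# ...while one facing directly away goes black.
print(lambert_shade((0, 0, -1), (0, 0, 1), (1.0, 0.5, 0.2)))
```

Programmable shaders meant developers could swap this fixed formula for whatever surface maths they liked, which is where the jump in graphical fidelity came from.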
 
You need to remember both companies use entirely different architectures, it's like comparing a cat with a dog. So comparing core speeds between the two is useless.

I suppose you could say though that for AMD the shader clock is synced, i.e. the same as the main core clock. I believe the newer 600 series from Nvidia is like this as well now, meaning a lower shader clock than before but with extra shaders added to negate the performance loss.
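A quick back-of-the-envelope sum shows exactly why raw shader counts and clocks don't compare across architectures. Using the published specs of the two cards in question (GTX 460 1GB: 336 shader units on a 1350MHz shader clock; HD 6850: 960 stream processors on a 775MHz core clock), and assuming each unit does one fused multiply-add (2 FLOPs) per clock:

```python
# Naive peak-throughput comparison from published specs.
# Assumes one fused multiply-add (2 FLOPs) per shader unit per clock.

def peak_gflops(shader_units, clock_mhz, flops_per_clock=2):
    return shader_units * clock_mhz * flops_per_clock / 1000.0

gtx_460 = peak_gflops(336, 1350)   # GTX 460: shader clock is 2x the core clock
hd_6850 = peak_gflops(960, 775)    # HD 6850: shaders run at the core clock

print(gtx_460, hd_6850)
```

On paper the 6850 comes out way ahead, yet in real games the two trade blows, because AMD's VLIW5 design rarely keeps all its units busy while Nvidia's fewer, hot-clocked shaders are used more efficiently. That's why comparing clocks (or shader counts) between the two tells you almost nothing.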
 

So stuff like shadows, water, SSAO, and even wetness from weather etc. are all applied on the surfaces thanks to the shader clock? Good to know. :cool:


So the AMD cards do have shader clocks, they're just tied to the core clock. Cheers mate!

I think that about wraps it up. :D
 