Shader clock not sticking (BIOS overclock)

Hi folks,

I've got a GTS 250 that I've overclocked, saving the new clocks into the BIOS. The problem is that although they are saved, the card still appears to revert to a lower setting. Click here to see an illustration of my problem.

Out of the several models offered by Gigabyte, I own this one.

I've managed to increase the clocks from the original values:
Core Clock: 675 → 770
Shader Clock: 1000 → 1150
Memory Clock: 1620 → 1850 (this one won't stick despite being a safe value; it reverts to 1836 in the BIOS even though the saved value is 1850)

Has anyone had any experience with the Shader clock being locked? Is this a safety or security mode?
 
Anyone?

I've been using the NiBiTor + nvflash combo. Something interesting as well: GPU-Z detects my card as 65nm when I believe it's meant to be 55nm. I thought the GTS 250s were based on the 9800 GTX+, which is 55nm, not the 9800 GTX (65nm).

It would make sense though, because my stock clocks are VERY similar to those of a 9800 GTX.

Could it be that the range offered by Gigabyte (there are 3 cards in the 1GB GTS 250 range) is a mixture: a 9800 GTX for the lower-end GTS 250 (the one I own), while the Zalman-cooled and OC versions are based on the 9800 GTX+?
 
I think I know why I am unable to obtain a higher overclock.

It seems I have been sold a GTS 250 which is actually a rebadged 65nm 9800 GTX, not the 55nm 9800 GTX+. Some deeper searching turns up that I may not be the only one, which leads me to ask: if you own a GTS 250, can you please post whether your card uses a 55nm or a 65nm fab process?

Techgage are saying GPU-Z does not read the fab process correctly. I have doubts, and here's why: GPU-Z should have been updated since that article was written, so the version I ran should correctly detect whether my card is 65nm or 55nm.

Furthermore, my stock clocks correspond perfectly to those of a 65nm 9800 GTX rather than the 55nm 9800 GTX+ my GTS 250 is supposed to be. I have doubts about whether all GTS 250 graphics cards are 55nm. There are even some ASUS GTS 250 cards which need 2 PCIe connectors, as shown here.

I thought one of the requirements for a GTS 250 is that it only needs 1 PCIe power connector, yet this one clearly requires 2. So much for the smaller, more power-efficient and leaner GTS 250.

I thought my GTS 250 would be a 55nm 9800 GTX+, but it seems it's actually a lower and older part: a 65nm 9800 GTX.
 
Sorry to hear about that; it sucks when you go searching only to come up with what you didn't want to see. I hope others will post their findings so we can see whether most of these cards are the older 9800 GTX.
 
It's definitely the 55nm G92b version, perhaps even the low-power variant of the G92 chip, the G92-426. The card has one PCI-E connector and probably cheaper voltage control due to the lower wattage, hence the shorter PCB design.

GPU-Z reports it wrong - it reports my Palit E-Green as a 65nm chip and I know for a fact it's not - I've seen the core labels and it's the G92-426.

What I found with these cards, after extensive testing in GPU-limited games and benchmarks, is that they are severely memory-bandwidth limited, and raising the core and shaders results in only a small FPS increase. You gain the vast majority of performance from overclocking the mem as far as it will go. Test for yourself using something like the Lost Planet 2 free benchmark on max settings (but no AA or AF), or even lower settings as long as you are still GPU limited. Test using OCCT (similar to FurMark), 3DMark etc.

Remember these chips were initially designed for 512-bit and 384-bit GDDR3, but that got cut down to 256-bit for the 9800 series. With lower-speed mem thrown into the mix on some GTS 250s, the core and shaders can actually be lowered by a certain amount (in comparison to the 9800) and it hardly impacts performance. I think that's one of the reasons we get a lot of GTS 250 cards with lower clocks when they have 1800 and 2000MHz GDDR3.
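
To put rough numbers on the bandwidth point, here's a quick Python sketch of the standard peak-bandwidth formula (the 256-bit bus is from the post above; treating GDDR3 as double data rate is the usual assumption, and the clock values are just examples from this thread):

    # Approximate peak memory bandwidth for a 256-bit GDDR3 card like the GTS 250.
    # bandwidth (GB/s) = (bus width in bytes) * (memory clock in MHz) * 2 (DDR) / 1000
    def gddr3_bandwidth_gbs(mem_clock_mhz, bus_width_bits=256):
        """Rough peak bandwidth in GB/s for a double-data-rate memory bus."""
        return bus_width_bits / 8 * mem_clock_mhz * 2 / 1000

    for clock_mhz in (1000, 1100, 1150):
        print(f"{clock_mhz} MHz -> {gddr3_bandwidth_gbs(clock_mhz):.1f} GB/s")
    # 1000 MHz -> 64.0 GB/s
    # 1100 MHz -> 70.4 GB/s
    # 1150 MHz -> 73.6 GB/s

The scaling is linear, which is why every extra MHz on the mem pays off when the card is bandwidth limited.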

Also, the shaders run on set ratios, so if you don't put in an exact number, it will round back or forward to the nearest allowed value (e.g. 1836 when you set 1850). I think they jump in steps of 54MHz.
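
If you want to sanity-check what the BIOS will actually give you, here's a tiny Python sketch of that rounding behaviour (assuming the allowed values really are exact multiples of 54MHz, which is just my reading of the observation above):

    # Shader clocks appear to snap to a fixed 54MHz grid; rounding a requested
    # value to the nearest multiple reproduces the 1850 -> 1836 case.
    STEP_MHZ = 54  # assumed step size

    def snap_shader_clock(requested_mhz, step=STEP_MHZ):
        """Round a requested shader clock to the nearest allowed step."""
        return round(requested_mhz / step) * step

    print(snap_shader_clock(1850))  # 1836 - what the BIOS reverts to
    print(snap_shader_clock(1890))  # 1890 - already on the grid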

Lastly, the core and shader clocks need to be within a certain range of one another or they won't work.

As said earlier, these GTS 250 cards are really hampered by the cheaper mem chips; the standard 9800 GTX had mem speeds of 2200 to make up for the reduced 256-bit interface, which also allows it to make use of faster core and shader clocks. I'd leave the clocks at standard and set the mem as high as you can for the biggest performance increase with low temps and power usage.
 
EDIT: I see you can get your mem to 1150 - that's fantastic! I was limited to 1000 on my GTS 250, but it is an E-Green in my HTPC. I was a little confused at first: you have the mem and shader clocks the wrong way around in your opening post.

With the mem running faster, you may get some performance out of increasing the core and shader, but test carefully, as there will be a saturation point. No need to go as high as you can on these clocks and use unnecessary power; it does only have 1 PCI-E connector after all. 725 and 1836 should be fine.
 
Thanks for your contribution to the thread. Regarding the voltage, I noticed my card runs at 1.15V as opposed to the normal 1.2V (ASUS cards run at 1.3V). Deep down I'm happy with the card, because I got it for a great price and it's a brand name with a long warranty behind it. I just wish it could overclock to be similar to others in the range.

I noticed some of the "Green" versions of the GTS 250 too.

Regarding the set ratios, can you provide more info on that? The reason I ask is that 1836 seems to be the max that some cards are allowed to run at. Is this perhaps a safety setting? Looking at the original stock clocks and comparing them to what I've got, I suppose it's not so bad. I just wanted more.
 
No problem, had a lot of info on the subject as I was optimizing my GTS 250 E-Green a few weeks ago.

I've never heard of the shader being limited via the BIOS, but it does only go up in steps of 54MHz, and if you set a value in between these fixed steps, it will round up or down to the nearest one. So if you want higher than 1836 (which I doubt will give you any performance gain) you'd need to set 1836+54=1890. Try that.

The core clock does limit the range of what the shader will do. Increasing the core increases the range you can set the shaders to, I think, but your core is already quite high at 775 so it shouldn't limit your shader.
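
For what it's worth, the two rules of thumb in this thread (54MHz steps, and the shader staying within some range of the core) could be combined into a little Python checker. The ratio window below is purely illustrative; I'm guessing at the bounds from pairs like 738/1836, so treat it as a sketch rather than the card's real limits:

    import math

    STEP_MHZ = 54                    # assumed shader step size
    MIN_RATIO, MAX_RATIO = 2.0, 2.6  # hypothetical shader:core window, not a documented spec

    def valid_shader_clocks(core_mhz, step=STEP_MHZ):
        """Shader clocks on the 54MHz grid inside the assumed ratio window."""
        lo = math.ceil(core_mhz * MIN_RATIO / step) * step
        hi = math.floor(core_mhz * MAX_RATIO / step) * step
        return list(range(lo, hi + 1, step))

    print(valid_shader_clocks(775))
    # [1566, 1620, 1674, 1728, 1782, 1836, 1890, 1944, 1998]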

Ideally, if your card can do 738/1836/1150, I'd be happy with that. It's almost as fast as a 9800 GTX+. Pushing too hard might be bad for the board; it won't be designed to run at the same overclock level as a 9800 GTX+, as those had beefier voltage regulation, a larger PCB and two PCI-E sockets. It's possible the shader is limited for this reason, to prevent damage, but usually you just get crashing if the card can't maintain voltages.

Personally, I'd run it at 725/1782/1150, as I think the core and shader will be memory limited anyway, but that's just a guess going by what I found.
 
Firstly, I want to commend you on your post.


You know, normally I'd be happy with the card. Yes, I did get it to overclock, and the overclock is good, but I want something GREAT. With Gigabyte offering a 3-year warranty as opposed to a generic brand's 1-year warranty, I'm happy to take the risk.

Looking at my card, it has a chunky heatsink and a fairly good fan, so I'm confident cooling will be no issue. The components look fairly solid too. I'm going to up the voltage from 1.15V to 1.2V, increase the clocks a little, and see whether I get any performance gain, then report back.

I remember reading a fair bit about the green card (Palit, Gainward or Sparkle?) but I'm not sure why anyone would want it. If you wanted something for an HTPC, wouldn't you prefer a fanless solution which used less power, like a GT 220?
 
I can understand your confusion. Go ahead and overclock your card, just be sure you're hitting the numbers AND gaining performance, as it's not always clear cut with the G92, as I explained above. You may get different results. :)

I got the E-Green as it was dirt cheap (£50) and I wanted a G92 as it's a legendary chip! I'd never owned one; I've gone all ATI, including a 6850 in my main machine. So I thought I'd put an nVidia card in the HTPC and use it to play a few games on the downstairs TV, and also gain some CUDA and PhysX support. Plus I hate those puny cards that can barely do 3D!

Another reason I got the Palit E-Green (same as the Gainward Deepgreen) is that it's one of the few cards that will fit my HTPC. It was either that, or a single-slot 8800/9800 GT or 4850. A 5770 would probably have fit too, but I have ATI cards already. It's a tiny card, if you've seen pics, and uses the low-power version of the G92 (designed originally for top-end laptops). My HTPC only has a 380W Antec TruePower PSU, and this card pulls about the same as a 5770 measured at the wall.

Finally, the card has a very easy-to-remove plastic shroud: four screws and within seconds you can whip it off, leaving the heatsink still attached. That meant I could screw on a 120mm silent fan!!! :D

Oh, and it's one of the few G92 cards that has an HDMI socket plus audio!

It's not much faster than a 9800 GT. The extra CUDA cores help in PhysX games and CUDA apps, though. Overall it's a nice, compact little card. And now I can say I own one of the last of the G80/G92 generation chips.

You've got a nice card there too. Enjoy it!
 
Okay, some news. You are right about the 54MHz shader increments. I have been getting some minor artifacting when I increase the shader clock from 1836 to 1890. I'm not sure if increasing the voltage will prevent that, as I was always of the belief that increasing the voltage only helps with the core clock, not the shader clock.

So much like overclocking a CPU, all the clocks here must work in harmony. I guess this is why the shaders are linked to the core clock, right?

But yeah, I echo your thoughts on the G92. This is actually my spare PC, but I love the G92 and it's such a great card, even now. It evolved from its original card into later, more efficient cards. It got a lot of flak, unfairly so, because this was basically a high-end 8800 series card at an 8600 price which could run on a 450W PSU.

The 8800 series is one of the best-supported card lines, and I've never run into any driver issues. Loved the card. Same with my CPU; it's based on a Wolfdale. The reason I bought these parts is that they are based on more premium parts but are dirt cheap and can be overclocked to increase performance.

So as of right now, I'm increasing the voltage from 1.15V to 1.2V and seeing if I can get more stable clocks without artifacting.

EDIT:

Results are in, and I've been able to match the higher clocks seen online (GTX+ territory). Temps have shot up but are still safe (maxing out at 80°C). That's an increase of 500+ 3DMarks over my previous result, and 2015 3DMarks over the stock card settings. The voltage increase may have helped, but the key factor was probably balancing out the clock settings to complement each other. Thanks so much for your advice, it's greatly appreciated.

Just how did you know about the 54MHz incremental brackets?
 
No problem - thanks to you for getting back with your results - so many posters never bother returning to a thread with their conclusions! :)

My card had an app called vtune, which allowed me to play with the clocks in Windows on the fly. Very useful, as I could find the optimal settings quite quickly before flashing the BIOS.

In vtune, I simply noticed it would only jump up in 54MHz increments on the shader. :D
 
Nice, I'd never heard of vtune. I used Afterburner + FurMark to find what was suitable for me. I might start folding on my machine; I just hope my power bills don't go through the roof.

Heads up if you haven't heard of Afterburner: it also lets you adjust clocks on the fly in Windows, and it seems compatible with ATI and nVidia cards from all manufacturers.
 