Would there be any kind of noticeable diff when upping FSB

Soldato · Joined: 8 Nov 2006 · Posts: 9,237
My current setup is in my sig: 333 FSB with a 9x multi on the CPU and everything at stock volts.

So it all runs happily and the cooling seems to be working well, but what I'm curious about is whether I'd notice a difference in games or video editing if I ran 450 FSB with a 7x multiplier.

Or is it negligible, and might I just as well keep it as is?
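For reference, the core clock is just FSB x multiplier, so the two settings aren't quite clock-for-clock. A rough sketch of the arithmetic, using only the numbers above:

```python
# Rough sketch: core clock = FSB x multiplier (values from the post above).
current  = 333 * 9   # 2997 MHz
proposed = 450 * 7   # 3150 MHz

print(f"333 x 9 = {current} MHz")
print(f"450 x 7 = {proposed} MHz")

# The proposed setting is ~5% more core clock as well as more FSB,
# so any gain wouldn't be down to the bus alone.
print(f"clock difference: {(proposed - current) / current * 100:.1f}%")
```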
 
A higher FSB would give greater bandwidth, which helps out quad cores. The best way to tell would be to run a benchmark at both settings and compare the results.
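Roughly speaking, the FSB is quad-pumped and 64 bits wide, so peak theoretical bandwidth scales linearly with the base clock. A quick sketch of the arithmetic (my own figures, assuming a standard quad-pumped 64-bit bus):

```python
# Peak theoretical FSB bandwidth: base clock x 4 transfers/clock (quad-pumped) x 8 bytes (64-bit bus).
def fsb_peak_gb_s(base_mhz: float) -> float:
    return base_mhz * 4 * 8 / 1000.0  # MB/s -> GB/s (decimal)

for base in (333, 450):
    print(f"{base} MHz FSB -> ~{fsb_peak_gb_s(base):.1f} GB/s peak")

# 333 MHz -> ~10.7 GB/s, 450 MHz -> ~14.4 GB/s.
# Whether real workloads can actually use the extra headroom is the real question.
```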
 
What would be a good bench? Something that might give at least a rough idea of real-world performance gains. Oh, and one that's free :)
Any of the 3DMark programs will do; I'm not sure whether the older ones are better for CPU benching. Alternatively, if you have FEAR or COH, run the internal benchmark (it uses in-game scenes) a few times at each setting and compare the results. Use a low resolution with no AA/AF, as this takes the pressure off the graphics card and puts the load onto the CPU.
 
Actually, yeah, the Crysis CPU bench would probably be a good one.

But I'd still like to know if anyone has tried this themselves, and whether or not they noticed any difference. If it's generally not going to do much for system performance, then I won't bother playing with it when I get home and will just leave it as is. After all, it is stable and cool...
 
Raising the FSB may require raising the NB volts a tad.
With the Crysis bench, make sure it's at a low resolution with low graphical settings, otherwise it will be GPU-limited and won't show any difference in the scores. You need the CPU to be doing the work for an accurate comparison.
 
367 x 9 (3303MHz)
RAM = 918

3DMark03 = 41677
3DMark05 = 18998
3DMark06 = 13213
SuperPI (4M) = 1:25.563
ScienceMark = 1846.74



472 x 7 (3297MHz)
RAM = 942

3DMark03 = 41577
3DMark05 = 19062
3DMark06 = 13155
SuperPI (4M) = 1:24.500
SuperPI (8M) = not run
ScienceMark = 1894.23



The answer is therefore: no.


That's a Q6600 on a 680i board, but the P35 gives the same answer. The bottleneck on all current systems is the absolute CPU speed, so faster RAM and faster bus make no difference.
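To put those runs in percentage terms, here is a quick sketch over the scores above (pure arithmetic on the posted numbers; SuperPI is a time, so lower is better):

```python
# Percentage change going from 367x9 to 472x7, per benchmark (figures from the post above).
scores_367x9 = {"3DMark03": 41677, "3DMark05": 18998, "3DMark06": 13213,
                "SuperPI 4M (s)": 85.563, "ScienceMark": 1846.74}
scores_472x7 = {"3DMark03": 41577, "3DMark05": 19062, "3DMark06": 13155,
                "SuperPI 4M (s)": 84.500, "ScienceMark": 1894.23}

for test, old in scores_367x9.items():
    new = scores_472x7[test]
    print(f"{test}: {(new - old) / old * 100:+.2f}%")

# Largest change is ~2.6% (ScienceMark); the 3DMark scores move by well under 0.5%.
```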


M
 
I reckon if you run a game benchmark things may be different; Supreme Commander without AA/AF might show a different story.
 
Why? 3DMark2006 with AA and AF off has pretty much the same effect on a system as a modern game. Something like COD4 would actually be even less affected by the bus bandwidth as it's mostly GPU-limited anyway.


People still seem to be wedded to the idea of max bus-speed from the days of things like T-breds where it really did matter. It doesn't any more: CPU speed is king, queen, knave and several other trumps as well. But no-one seems to want to hear this.


M
 
The argument is not about whether you get more bandwidth, but whether the system can actually use it: that is, whether you get better overall performance. For that matter, I'd question whether any chipset since the NF2 runs better synced than unsynced, because on all the ones I've tested it makes no difference either way. If the bus isn't the bottleneck, giving it more bandwidth will make no difference. It's the same reason AGP x8 was no faster than x4. Or x2, for that matter.


M
 
There is no point in raising the FSB while leaving the memory locked at a lower speed; that would be like rolling out a ball of string faster than you can roll it back in.

The reason you get more performance at a higher FSB is that the memory is also clocked faster.

Run SiSoft Sandra and watch the bandwidth tests go up as you clock the FSB/memory higher and higher.
 
3DMark2006 puts a PC through a set of pre-determined steps. I would have thought a game that has to calculate information on the fly would make far greater demands on the non-graphics parts of the system.
A system that is benchmark-stable may not be application-stable, for instance.
I have two Barton-cored AMDs and both are clocked via the FSB rather than the multi, but I think when it comes to quad cores and apps that utilise all four cores, more bandwidth is better.
 
This is missing the point: I know bandwidth goes up with speed. My argument is that bandwidth is a very minor factor in overall performance, so increasing it will not show any real-world improvement in the system overall. But people still remember old chipsets and assume that nothing has changed. Most people now accept that only CPU speed matters on the A64, so why won't they accept it for the P35 and other socket 775 chipsets? As soon as I show it makes no difference, people immediately say that it does.


And for those still wedded to the old NF2 "synchronised is better than unsynchronised" idea, I present exhibit B. A different chipset, I'm afraid (680i), but it's all I had:


Ratio   3DMark03   3DMark05   3DMark06   SuperPI    SciMark
1:1     32230      14780      10027      1:56.484   1379.13
5:4     32327      14755      10010      1:55.406   1380.99
3:2     32027      14631      9926       2:02.672   1363.36


Ratio   Arith         M-M            Bandwidth
1:1     22183/15152   129948/71427   5675/5681
5:4     22163/15147   132074/71337   5605/5602
3:2     22142/15185   132106/71336   5524/5523



(last three tests are Sandra)

Note that the specific test for RAM bandwidth shows a difference, but nothing else changes much. 5:4 is certainly within statistical variation, but I'll concede a small drop for 3:2. I make it about 1% at most, and a lot less for many of the tests. I'll have to see if I have the equivalent for P35, but I don't think I do.
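And the same arithmetic on the table above, as a rough sketch: the change going from 1:1 to 3:2 for each test (SuperPI is a time, so a positive change means slower):

```python
# Change from 1:1 to 3:2 for each test in the table above (1:1 value first, 3:2 value second).
pairs = {
    "3DMark03":    (32230, 32027),
    "3DMark05":    (14780, 14631),
    "3DMark06":    (10027, 9926),
    "SuperPI (s)": (116.484, 122.672),   # 1:56.484 vs 2:02.672
    "SciMark":     (1379.13, 1363.36),
}
for test, (sync, div) in pairs.items():
    print(f"{test}: {(div - sync) / sync * 100:+.2f}%")

# The 3DMark and SciMark scores drop by roughly 1%; SuperPI is the outlier at about 5% slower.
```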


M
 
I agree that running on a divider doesn't seem to impact performance, but greater bandwidth would benefit quad-core performance in apps that take advantage of all four cores. I don't have a quad, but I'm going from what I have read on various sites.

I would have thought that a quad at 4GHz (500x8) would run a non-GPU-intensive app faster than a quad at 4GHz (333x12) due to the extra available bandwidth.
 