**LET'S SEE YOUR PILEDRIVER OVERCLOCKS - LET ME START WITH 5GHz+!!**

Hi there


Well, AMD are marketing Piledriver as one of their best overclocking CPUs yet. Sceptical due to Bulldozer's relatively poor overclocking and the heat issues it had, I was keen to try Piledriver.

AMD state 5GHz is possible with a good air cooler; I used their own AMD cooler, which is a Corsair H70/H80 equivalent, with fans in push/pull mode.

Here are my results so far:

[Image: piledriverocuk.jpg - overclock result screenshot]




I am rather impressed. I don't think there is much more in it to be honest, maybe 5.3GHz at a push, but 5GHz is certainly a possibility, which is superb for an 8-core chip.

Be interesting to see if the 6-core and 4-core variants can hit 6GHz+?


So let's see your results. :)
 
Is it me, or is 17.375s for a 1M Pi calculation very long? Especially taking into account it is a 5.25GHz 8-core chip?
Isn't SuperPi a bit naff for benching high core counts (or is that just hyperthreading)?
Still, I'd expect to see way more from a 5.25GHz chip!
 
Does such AMD overclocking give any real speedup? :)
A CPU OC just for SuperPi is like eating soup with a fork. Let's do some tests with LinX and see how stable this OC is.
 
The newer AMD CPUs have deprecated x87 support in the hardware, as it is irrelevant nowadays. Even PhysX, when run on the CPU, does not use it anymore, as it is ancient.

It's like running an AES-NI benchmark on a Core i7 920 and comparing it to an FX-8150.
 
Eh? For the latest and greatest tech that's pretty bad! I just tried SuperPi on my fairly old i5 750 (1st gen, not Sandy or Ivy) and it beat that by quite a chunk...

I was hoping it would be a good competitor and I could maybe look into AMD for my new Easter build! Guess not... :( Either that or SuperPi is just a bad tool for benchmarking... (I hope)

[Image: 2lbggau.png - SuperPi result screenshot]
 
Either that or SuperPi is just a bad tool for benchmarking... (I hope)

It's just a stupid waste of time. It is only useful for comparing CPUs with cores of a similar lineage.

Like I said, it's like using an AES-NI benchmark to compare general CPU performance.

An FX-8150 or a Sandy Bridge Core i5 would obliterate a Core i7 920 at AES-NI workloads even if the CPUs were all at the same clock speed. Would this be true in games? No.

This is an article from when PhysX used to use x87 code:

http://techreport.com/news/19216/physx-hobbled-on-the-cpu-by-x87-code

Now, David Kanter at RealWorld Technologies has added a new twist to the story by analyzing the execution of several PhysX games using Intel's VTune profiling tool. Kanter discovered that when GPU acceleration is disabled and PhysX calculations are being handled by the CPU, the vast majority of the code being executed uses x87 floating-point math instructions rather than SSE. Here's Kanter's summation of the problem with that fact:

x87 has been deprecated for many years now, with Intel and AMD recommending the much faster SSE instructions for the last 5 years. On modern CPUs, code using SSE instructions can easily run 1.5-2X faster than similar code using x87. By using x87, PhysX diminishes the performance of CPUs, calling into question the real benefits of PhysX on a GPU.

Kanter notes that there's no technical reason not to use SSE on the PC—no need for additional mathematical precision, no justifiable requirement for x87 backward compatibility among remotely modern CPUs, no apparent technical barrier whatsoever. In fact, as he points out, Nvidia has PhysX layers that run on game consoles using the PowerPC's AltiVec instructions, which are very similar to SSE. Kanter even expects using SSE would ease development: "In the case of PhysX on the CPU, there are no significant extra costs (and frankly supporting SSE is easier than x87 anyway)."

So even single-threaded PhysX code could be roughly twice as fast as it is with very little extra effort.

Between the lack of multithreading and the predominance of x87 instructions, the PC version of Nvidia's PhysX middleware would seem to be, at best, extremely poorly optimized, and at worst, made slow through willful neglect. Nvidia, of course, is free to engage in such neglect, but there are consequences to be paid for doing so. Here's how Kanter sums it up:

The bottom line is that Nvidia is free to hobble PhysX on the CPU by using single threaded x87 code if they wish. That choice, however, does not benefit developers or consumers though, and casts substantial doubts on the purported performance advantages of running PhysX on a GPU, rather than a CPU.

Indeed. The PhysX logo is intended as a selling point for games taking full advantage of Nvidia hardware, but it now may take on a stronger meaning: intentionally slow on everything else.

Guess what?? Nvidia ditched x87 in the PC version of PhysX at the end of 2010 and is now using SSE.
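Worth noting that the x87-vs-SSE gap Kanter describes is purely a code-generation choice, not an algorithm change: the same scalar C loop can be compiled either way. A minimal sketch (the gcc flags shown in the comment apply to 32-bit x86, where x87 is the default; flag usage here is my illustration, not from the article):

```c
/* The same FP-heavy inner loop can target x87 or SSE at compile time:
 *   gcc -O2 -m32 -mfpmath=387 ...          -> x87 code (old default)
 *   gcc -O2 -m32 -msse2 -mfpmath=sse ...   -> SSE scalar code
 * Comparing timings of the two builds shows the kind of 1.5-2x gap
 * the article talks about, with zero source changes. */
#include <stddef.h>

/* y[i] += a * x[i]: representative of physics-style FP work */
void saxpy(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] += a * x[i];
}
```

That is why Kanter calls the x87 choice neglect rather than necessity: switching instruction sets is a recompile, not a rewrite.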
 
Last time I looked, in most games the difference was about 6-10fps vs its equivalent. The power usage is a bit bad, but then you do get more cores (can't be bad for future games).
 
Any chance you could use something more meaningful that loads all the cores - Prime for example...

Nice overclock! Shame the processor seems... well... underwhelming!
 
Last time I looked, in most games the difference was about 6-10fps vs its equivalent. The power usage is a bit bad, but then you do get more cores (can't be bad for future games).
You wouldn't be saying that if you were someone with a 120Hz monitor and a GTX 690 or a similar-performance CF/SLI setup...

In a Skyrim bench just outside the town of Whiterun during a thunderstorm, at 1920 res on Ultra with a GTX 690, the FX-8350 at 4.8GHz does a minimum of 60fps and an average of 111fps, but the i5 3570K at 4.8GHz does a minimum of 111fps and an average of 165fps on the same card.

Granted, the FX would do OK with standard 60Hz monitors for most games, bar CPU-demanding titles like MMOs, but the question is: is it really worth shooting ourselves in the foot down the line for the sake of saving £50 (i5 3570K vs FX-8320)? Gaming performance wise, the Intel would last at least a year, if not two, over the PD before needing an upgrade, so an AMD build now might not really save money in the long run.
 
pretty bad superpi result

It's bad on any AMD processor; it's an old yet quick test to run and shows improvements even if the results are poor in the first place.

For example, 5GHz is 18.2s, 4.8GHz is just shy of 20s, and stock is somewhere around 25-30s. It's just not suitable for AMD chips, but it does reveal whether the OC and settings are improving system performance or not. :)

So far I am at 4900MHz Prime-stable; looks like 5GHz will be a hard one to crack. Not gonna give up though!
 