hardwareheaven shows BD > 2600k for gaming

Based on this I would tend to believe them. Not that it really matters; it seems that with that game any decent CPU (heck, even an Athlon II X4 does almost as well as an i7 920 in that test) will be limited by the performance of the graphics card.

Those results are within the margin of error tbh and are all GPU limited.
Unless of course you think a 2600k is a worse CPU than a Phenom II 980 :p.
 
Those results are within the margin of error tbh and are all GPU limited.
Unless of course you think a 2600k is a worse CPU than a Phenom II 980 :p.

Sorry, I should have clarified the first part of my statement.

I believe that in BF3 the new chip can be the top performer (or among them), but this doesn't matter because the game will still be GPU limited (so the truly better CPUs can't really shine). Looking at those results I agree that there is no significant performance difference between the top-performing CPUs in this particular game, and the better performance of some of the AMD CPUs (which are known to be a lot slower in more representative tests) is most likely an anomaly that can be accounted for by the margin of error in the framerate tests.
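
For anyone wondering what "within the margin of error" means in practice, here's a rough sketch (Python, with made-up FPS numbers purely for illustration, not taken from any review): if the gap between two CPUs' average framerates is smaller than the run-to-run spread of the results themselves, the benchmark simply can't tell them apart.

```python
from statistics import mean, stdev

# Hypothetical repeated benchmark runs (FPS) for two CPUs in a GPU-limited game.
# These numbers are invented for illustration only.
cpu_a_runs = [61.8, 62.4, 61.1, 62.9, 61.5]
cpu_b_runs = [62.3, 61.0, 62.8, 61.6, 62.1]

def within_margin_of_error(a, b):
    """Crude check: is the difference in means smaller than the
    combined run-to-run variation of the two result sets?"""
    diff = abs(mean(a) - mean(b))
    noise = stdev(a) + stdev(b)
    return diff < noise

print(mean(cpu_a_runs), mean(cpu_b_runs))
print(within_margin_of_error(cpu_a_runs, cpu_b_runs))  # True -> effectively a tie
```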
 
The thing is, as we can all probably agree, gaming is pretty irrelevant to benchmarking any high-end CPU because the GPU will be the limitation. So BD didn't/doesn't need to excel in games; if it was good elsewhere, in things like multitasking, encoding etc., it'd still be a good choice.

The problem is, it's not great at encoding and no one does multi-tasking benchies.
 
Looking at those results I agree that there is no significant performance difference between the top-performing CPUs in this particular game, and the better performance of some of the AMD CPUs (which are known to be a lot slower in more representative tests) is most likely an anomaly that can be accounted for by the margin of error in the framerate tests.
And let's not forget that the motherboard can easily affect the results by a couple of fps, even if the same CPU and graphics card are used. The Asus P8P67 Deluxe that TechSpot used is not exactly among the top-tier 1155 boards... in fact, if I recall correctly, it is slower than even its little brother, the Asus P8P67 Pro...

I loled at hardwareheaven's cherry-picked game titles for comparing the FX-8150 to the i7 2600K, and how they deliberately avoided mentioning the power consumption of their FX-8150 overclocked to 5.2GHz on 1.5V.

From hardwareheaven:
"There are two areas where the FX-8150 excells though, those are gaming and overclocking. In the former we saw the processor give us improved framerates over the Intel model. In the latter the ability to exceed 5GHz with ease offers additional value for money. In fact AMD have indicated that they expect most users to exceed 4.8GHz on air cooling."

No mention of the FX-8150 at 4.8GHz drawing an extra 300W over a 2600K overclocked to 5GHz?
http://www.bit-tech.net/hardware/cpus/2011/10/12/amd-fx-8150-review/10

...someone must be getting some dough under the table...
 
They tested to the point of being GPU bound, not CPU bound.

The results are as expected, when GPU bound the CPU cannot show what it's capable of so hits a wall.

This is why low-resolution 0xAA/AF tests are performed: the GPUs tick over and the CPU is given room to flail its arms around in.

Hardware Heaven's excuse is "But nobody plays at these settings". Well, no crap? You bench to draw valid conclusions, not to see if games are playable. Seeing if games are playable is fine and all, but HH state "Excellent CPU for gaming", and that is false. Based on their testing the only statement they are qualified to make is "Adequate CPU for gaming".

As said in another thread, I doubt the CPUs were even taxed AT ALL in their gaming benchmarks.
 
or HH skewed its results to muster interest and improve its CTR on those rather short pages :p

This would make a lot of sense.

It is certainly getting a heck of a lot of publicity because of their reviews.

Before today, I had not even heard of hardware heaven.
 
Hardware Heaven reviews are generally pretty good; this is the first time I've seen one which is somewhat questionable. I definitely agree though - it IS somewhat questionable. Money may have changed hands...
 
It's not skewing results or faking them. It's just using the WRONG TESTING METHOD FOR A CPU.

Nobody cares what FPS you get at 2910291029 x 129019201921 with 600x AA.

WE ARE NOT BENCHMARKING A GPU

If you bottleneck the GPU you prematurely wall in the CPU.

It's not a case of "Well lol nobody plays at 1024x768" because that is not the point of a CPU test. The point is to see which one can squeeze out the most performance when given space to run. You cannot give the CPU that space when your system is walled in by a GPU bottleneck. Why is it important to test without a GPU bottleneck? Because you see which CPU has more headroom as GPU power increases.

HH's data is not false, it's just misdirection. It's removing the important CPU data which illuminates the extra performance you can get when you have a more powerful GPU. HH's data says "An 8150 will drive your games at these settings identically to an i7". What it should be saying is "An 8150 can drive your games at these settings identically to an i7, but given a more powerful GPU the i7 will break off into a lead".
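
To put the "walled in by the GPU" point another way, here's a toy model (Python, all the millisecond figures invented purely for illustration, not measured from any review): each frame takes roughly max(CPU time, GPU time), so once the GPU is the slower of the two, a faster CPU stops showing up in the FPS number at all, until you give it a faster GPU or drop the settings.

```python
# Toy bottleneck model: frame time is dominated by whichever of CPU or GPU
# is slower for that frame. All numbers below are hypothetical.

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    frame_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)
    return 1000.0 / frame_ms

fast_cpu = 6.0    # hypothetical: CPU work per frame on the quicker chip
slow_cpu = 10.0   # hypothetical: CPU work per frame on the slower chip

heavy_gpu_load = 16.0  # e.g. high resolution with lots of AA (GPU-bound)
light_gpu_load = 4.0   # e.g. low resolution, 0xAA/AF (CPU-bound)

# GPU-bound settings: both CPUs report the same FPS, the difference is hidden.
print(fps(fast_cpu, heavy_gpu_load), fps(slow_cpu, heavy_gpu_load))  # 62.5 vs 62.5

# CPU-bound settings (or a future, faster GPU): the gap appears.
print(fps(fast_cpu, light_gpu_load), fps(slow_cpu, light_gpu_load))  # ~166.7 vs 100.0
```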

To quote an analogy posted elsewhere.

Hardware Heaven's review is like seeing which car has the faster top speed, a Ford Fiesta or an Aston Martin, when neither is allowed to break the speed limit.
 
Uh... most of those gaming results seem to be in GPU-limited scenarios, and the differences are well within the margin of error and completely negligible; to call it a win for either CPU based on those results is entirely absurd.

The NF200 chip won't have any impact on single-card performance here. It's a switch, which would have some implications for CrossFire performance if that were used, but it will be running at full speed for a single card. (Due to the way the NF200 switch works, SLI sees most of the benefit, as if it were splitting into actual full-speed channels, while CrossFire wouldn't get the same benefit.)
 
Really shoddy review, that: clearly GPU-bound for the games tests, and it gives someone less savvy the impression that it'll be on a par with or better than a 2600K in most games, which clearly isn't the case from every other review that's been done.

Getting slaughtered for it on their forum and quite sad to see them making excuses for it.
 
The issue is the way the operating system schedules tasks for the processor. Apparently there will be fairly large improvements in overall performance when Windows 8 appears, as it will 'apparently' handle Bulldozer in a more optimal manner, so I wouldn't write it off just yet to be honest. Fair enough, the performance is a bit 'random' shall we say at the moment. Think about it like a busy junction: it doesn't operate efficiently if people are doing things they shouldn't be doing; things move best when everything is orderly and, most importantly, optimised for the junction in question, since all junctions are different in some way or another. (There's a little sketch of the idea below.)
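
For anyone wondering what "handling Bulldozer in a more optimal manner" would actually mean, the rough idea (this is an illustrative sketch in Python, not AMD's or Microsoft's actual scheduler code, and the core numbering is an assumption for the example) is that each module exposes two logical cores that share a front end and FPU, so a module-aware scheduler should spread a light thread load one-per-module before it starts doubling threads up on a shared module:

```python
# Illustrative only: a naive "one thread per module first" placement policy,
# the sort of thing a Bulldozer-aware scheduler is supposed to do.
# Assumed numbering for the example: logical cores 0-1 = module 0, 2-3 = module 1, etc.

MODULES = {0: [0, 1], 1: [2, 3], 2: [4, 5], 3: [6, 7]}  # FX-8150 style: 4 modules, 8 cores

def place_threads(n_threads):
    """Assign threads to logical cores, preferring an idle module over the
    second core of an already-busy module (which shares the front end/FPU)."""
    placement = []
    # First pass: one thread per module, so no module is shared unnecessarily.
    for cores in MODULES.values():
        if len(placement) == n_threads:
            break
        placement.append(cores[0])
    # Second pass: only now start using the second core of each module.
    for cores in MODULES.values():
        if len(placement) == n_threads:
            break
        placement.append(cores[1])
    return placement

print(place_threads(4))  # [0, 2, 4, 6] -> four modules used, none shared
print(place_threads(6))  # [0, 2, 4, 6, 1, 3] -> only two modules end up shared
```

A scheduler doing the opposite (filling both cores of module 0 first) leaves whole modules parked while two threads fight over one front end, which is presumably part of why the current results look so 'random'.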

The power consumption problems surely have something to do with the issues Global Foundries are having with their 32nm process, and I would expect to see those reduced or ironed out completely in the coming months. That should allow the architecture to reach higher frequencies (like the 30% promised by AMD!) and maybe, just maybe, the FX processors might have a place in the grand scheme of things.

However, I will personally be sticking with my X6 for the time being. I got it second-hand off a mate for a bargain price and it handles everything I do well; I even did some tinkering with CPU-NB settings and memory latencies, giving some interesting results! But I will continue to watch Bulldozer. Let's not forget it was never intended to be faster than Phenom II core vs. core, but I don't think it was meant to be slower in single-threaded applications either. Let's just wait and see; for the sake of competition and the market, I for one hope AMD make some revisions and get it right as soon as possible!

In the grand scheme of things, though, it shouldn't hurt AMD massively when you consider that Zacate is still the best performer in its target segment, Llano is holding the line in the lower-end market and Opteron processors are still doing fairly well in the server market. The problem is, and will be for the foreseeable future, that Bulldozer isn't optimised for the desktop workload. They would have been better off bringing Phenom II to 32nm, adding another two cores (probably the same die size as Bulldozer!) and bringing the Llano IPC improvements to a Phenom III (and before anyone says it, Llano DOES have improvements over the core it was based on, not many but some!). That wouldn't be bad, right? An eight-core Phenom II with better clock speeds and a ~10% improvement in IPC for the price of Bulldozer?

Finally, I don't understand why the hell everyone is so happy and gloaty about this result. Why is everyone so happy with an Intel monopoly over the higher end of the market? AMD competition is GOOD for us, not bad! What the hell?! I wonder how happy the Intel fanboys would be if AMD went bust and Intel could charge whatever they want for their products, while technological development slows because, at the end of the day, 'necessity is the mother of invention...'. I'm not taking sides in this either; I'm typing this post from my Intel-powered laptop with NVIDIA graphics instead of my desktop with AMD processors and AMD graphics! ;)
 
Windows 8 tests already performed, some improvements in some places, none elsewhere.

Windows 8 will not be the saving grace of BD.

Either way, AMD had years to get a scheduler patch sorted. Where is it?
 
I do hope Win8 helps things. Having said that, why release a processor that you know isn't yet properly supported? They should have waited, timed the release more sensibly and implemented some revisions to the existing units as a stop-gap. I don't see this getting better until the speeds are well above 4GHz out of the box, and with the current fab issues I can't see that being any time soon, and I don't know that AMD can either, since they are supposedly about to launch a self-contained watercooling system for their chips.
 
Windows 8 tests already performed, some improvements in some places, none elsewhere.

Windows 8 will not be the saving grace of BD.

Either way, AMD had years to get a scheduler patch sorted. Where is it?

Pfft, it needs a rebuild :p As I said before, Intel tried long pipelines to make high clock speeds possible and it failed, so I really can't believe AMD thought that they would make it work... and with a more complex architecture! A prefetch error would stall an entire module.
 
To be honest, they should never have launched with the current issues with the 32nm process, period! They can't easily explain those insane power draw figures; they just don't scale properly with frequency, so there is obviously a lot of leakage going on somewhere. If the thing was running at the frequencies originally intended it would have been more competitive from the off. I would like to predict that as revisions come and go the issue will be fixed, and I would guess frequencies will reach the level intended; that is the single biggest issue for me.

The fact it is slower in single-threaded applications than K10.5 is no surprise considering the streamlined nature of the architecture; it was always targeted at the multi-threaded environment. Hell, at the end of the day it is a server-orientated design and I am sure it will do well there, since the server market makes the best use of the design. But we can't say that the operating system is helping either, because sometimes modules are active in situations where they don't need to be (because of inefficient scheduling). The whole point was to be strong in multi-threaded workloads, hold the line in single-threaded (vs. K10.5) and really push the boat out on power-saving features, and those power-saving features can't work properly if modules are active when not needed. I'm not making excuses for them at all, just trying to cut through the cloud of negativity and see the 'light at the end of the tunnel...' for AMD. It is there if they can fix the fabrication problems and work out a better scheduler.

I think the lesson learned here is: never, ever, under any circumstances, rush an immature process/architecture to market before it is ready to be there. ;)

Edit: I honestly believe, given the results, that AMD would have been better off bringing a Phenom III to market and making sure Bulldozer was really ready before release. At the end of the day it's more suited to the server market, whereas an eight-core, 32nm K10.5 (complete with the Llano improvements) would be better for the desktop market.
 