
AMD Bulldozer Finally!

Hi there

In case you guys hadn't noticed, we're now taking pre-orders on the Llanos, and in our own early testing we've had the 3850 up to 4.7GHz so far. So yes, they are indeed very overclockable, which for the money makes them a great CPU. :)


Motherboards for these new CPUs seem expensive for what they are...
 
For the price of the APU, that is outstanding.

Just wondering, do you think ATI APUs will tie in with the GPU in the future to further hike performance?




Hi

You have a slight typo in the listing! It should read A8-3850, not A6-3850!

From reading around, it seems overclocking is going to be very motherboard-dependent, so do your research before plumping for a particular board.
 
I'm looking forward to Bulldozer...

I already have a 4-core CPU, so I would think it would be a little bit silly to step up to another 4-core... (even if the 2500K is a big upgrade from my current CPU)

That's why an 8-core BD is appealing.

However, BD's 8 cores aren't just Hyper-Threading with a fancy name and diagram, are they?

:o
 
Will the BD mobo support PCIe 3.0?

If it doesn't, then it's probably worth my £ waiting for the first PCIe 3.0 mobo, as let's be honest here: for a gamer, that's the single most important part of the mobo.

E.g. I upgrade my mobo/RAM and get a BD CPU, paired with my 5870: fantastic.

However, next year when the first PCIe 3.0 GPUs are around, my PCIe 2.0 mobo is not looking so pretty any more...
 

Nope, although some manufacturers may come out with boards, like MSI did with a Z68 board. However, I can't see PCIe 2.0 bottlenecking GPUs for a few years yet, seeing as the difference between running a 6990 at x4 and x16 is nominal.
 

I would like links to show that.
 
This is not strictly true.

Supposedly, current PCIe 2.0 GPUs 'could' see some improvement in a PCIe 3.0 slot due to a different kind of encoding:

Taken from here:

"PCIe 2.0 delivers 5 GT/s, but employs an 8b/10b encoding scheme which results in a 20 percent ((10-8)/10) overhead on the raw bit rate. PCIe 3.0 removes the requirement for 8b/10b encoding and instead uses a technique called "scrambling" in which "a known binary polynomial is applied to a data stream in a feedback topology. Because the scrambling polynomial is known, the data can be recovered by running it through a feedback topology using the inverse polynomial"[17] and also uses a 128b/130b ((130-128)/130) encoding scheme, reducing the overhead to approximately 1.5%, as opposed to the 20% overhead of 8b/10b encoding used by PCIe 2.0. PCIe 3.0's 8 GT/s bit rate effectively delivers double PCIe 2.0 bandwidth."
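The arithmetic in that quote is easy to check for yourself. Here's a quick sketch of the per-lane payload rates implied by the two line codes, assuming the 5 GT/s and 8 GT/s raw rates given above:

```python
# Per-lane payload rate after line-code overhead, for PCIe 2.0 (8b/10b)
# and PCIe 3.0 (128b/130b), using the raw rates quoted above.

def effective_gbps(raw_gt_s, data_bits, total_bits):
    """Payload rate in Gbit/s: raw transfer rate scaled by the line code."""
    return raw_gt_s * data_bits / total_bits

pcie2 = effective_gbps(5.0, 8, 10)      # 8b/10b  -> 20% overhead
pcie3 = effective_gbps(8.0, 128, 130)   # 128b/130b -> ~1.5% overhead

print(f"PCIe 2.0: {pcie2:.3f} Gbit/s per lane")   # 4.000
print(f"PCIe 3.0: {pcie3:.3f} Gbit/s per lane")   # 7.877
print(f"ratio: {pcie3 / pcie2:.2f}x")             # 1.97x, i.e. roughly double
```

So the "effectively delivers double" claim holds: the 8 GT/s raw rate alone is only a 1.6x bump, and the lighter encoding makes up the rest.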
 

Current GPUs do not hit the bandwidth limits of PCIe 2.0 even at x8. The only need I can see for PCIe 3.0 is SSDs.
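For context, here's a rough back-of-the-envelope sketch of what a PCIe 2.0 slot actually provides at the widths being argued about (per-direction figures, counting only the 8b/10b line-code overhead):

```python
# Usable per-direction bandwidth of a PCIe 2.0 link at common widths.
# 5 GT/s raw per lane; 8b/10b line code means 8 payload bits per 10 sent.

RAW_GT_S = 5.0
PAYLOAD_FRACTION = 8 / 10

def pcie2_gb_per_s(lanes):
    """Payload bandwidth in GB/s for a PCIe 2.0 link of the given width."""
    gbit = lanes * RAW_GT_S * PAYLOAD_FRACTION  # payload Gbit/s across all lanes
    return gbit / 8                             # bits -> bytes

for lanes in (4, 8, 16):
    print(f"x{lanes}: {pcie2_gb_per_s(lanes):.1f} GB/s")
# x4: 2.0 GB/s, x8: 4.0 GB/s, x16: 8.0 GB/s
```

Which is why x4 can pinch a 6990 while x8 and x16 look the same in benches: the jump from 2 GB/s to 4 GB/s matters, the jump from 4 GB/s to 8 GB/s mostly doesn't yet.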
 
Fair enough.

Still, I think it makes sense to wait for the first PCIe 3.0 mobo, just for the upgradability of it.

Maybe so. I try to stick with just buying when I need something rather than waiting, because we all know what waiting is like with computer components.

Edit: And even at PCIe 2.0 x16, it would not surprise me if RAM or your CPU became the limiting factor first.
 
I don't know if I can link. They are videos on YouTube from LinusTechTips.

Seen it already, and that is in no way conclusive testing with a single synthetic benchmark. Many users, even on this forum, say they notice a difference when going CrossFire with an x4 slot in use, even with multiple single cards, and there have been tests on Tom's Hardware showing a 10% drop in performance at x4 for single cards, which would effectively be 2x for each GPU on a dual card.
 

He did a real-world bench too, and that did show up to a 10% drop at x4 using a 6990 (which I would say is nominal). However, x8 was shown to be around the same as x16, which would mean PCIe 2.0 at x16 should not limit the highest-end dual-GPU cards for a year or two yet. By that point most people will be past first-generation Bulldozer, and those who aren't, I am sure, will not be buying the most powerful card on the market at that time.
 

10% is not nominal and could be a real issue in specific games, especially taking Eyefinity into consideration...
 

I was just trying to tell opethdisciple that he does not need to wait for PCIe 3.0, as PCIe 2.0 at x16 has more than enough bandwidth for even the biggest cards out at the moment and in the near future. That does not mean to say we will not need PCIe 3.0 eventually, just that there is no need to wait for it. Also, people running a system with 4 GPUs and Eyefinity will probably have a high-enough-end board, and upgrade often enough, that PCIe bandwidth is a non-issue.
 