i7 860 vs. i5 750 -- Pci-e Battle

Associate
Joined
30 Aug 2007
Posts
261
Hi,


I was educated by the folks in General Hardware that the i5 won't run two PCI-E cards at x16/x16; they are dropped to x8/x8 because of the i5's on-chip controller.

Just wanting to confirm that the i7 860 (also on Socket 1156) is not limited in the same way?



Thanks.
 
Soldato
Joined
24 Sep 2008
Posts
10,428
Location
Edinburgh.
They're on the same motherboard chipset, so I'd imagine it's still the same. It's all motherboard dependent, I think.
 
Associate
Joined
1 Sep 2005
Posts
1,588
Location
Bath, UK
It's the same. The on-die PCIe controller in all Lynnfield CPUs provides 16 PCIe lanes. That may not be a bad thing; depending on the cards you're pairing, there might not be any discernible difference anyway.
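As a rough back-of-envelope check of what x8 vs. x16 actually means in bandwidth terms (figures from the PCIe 2.0 spec, not from any benchmark; the helper name is mine):

```python
# Rough PCIe 2.0 bandwidth figures: 5 GT/s per lane with 8b/10b
# line coding -> 80% efficiency -> 500 MB/s per lane per direction.
GT_PER_S = 5.0                # gigatransfers per second, per lane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b: 8 data bits per 10 line bits
BYTES_PER_TRANSFER = 1 / 8    # 1 bit per transfer, converted to bytes

def lane_bandwidth_gb_s(lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe 2.0 link in GB/s."""
    return GT_PER_S * ENCODING_EFFICIENCY * BYTES_PER_TRANSFER * lanes

print(f"x16: {lane_bandwidth_gb_s(16):.1f} GB/s")  # 8.0 GB/s
print(f"x8 : {lane_bandwidth_gb_s(8):.1f} GB/s")   # 4.0 GB/s
```

So dropping from x16 to x8 halves the theoretical ceiling per card, from 8 GB/s to 4 GB/s each way; whether a given card ever gets near either figure is a separate question.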
 

RJC


Don
Joined
29 May 2005
Posts
28,933
Location
Kent
If you want dual x16 then you will need to look at the X58 range, but the performance difference is minimal, going by some past benchmarks.
 
Soldato
Joined
11 Sep 2003
Posts
14,703
Location
London
Does anyone know if this is a P55 chipset limitation, or just due to the current on-chip PCI-E 2.0 controller being x8/x8 in Crossfire?

i.e. in theory, could a new LGA1156 chip feature x16/x16 in Crossfire? If so, could the P55 support this? :confused:
 
Associate
Joined
17 May 2004
Posts
1,585
Does anyone know if this is a P55 chipset limitation, or just due to the current on-chip PCI-E 2.0 controller being x8/x8 in Crossfire?

i.e. in theory, could a new LGA1156 chip feature x16/x16 in Crossfire? If so, could the P55 support this? :confused:

Surely your question answers itself: you have already said that the CPU contains the PCI-E controller, so if the PCI-E lanes go direct to the CPU, why do you think it would be a P55 chipset issue?

What does the P55 have to do with the PCI-E x16 slots on the motherboard? NOTHING!

Current boards wouldn't support it, as there aren't enough traces on the boards. I would also guess there probably aren't enough connections on the socket to add more lanes.

Also, how would they differentiate between 1156 boards and 1366 boards if both were made x16/x16? It could actually lead to the mainstream outperforming the extreme platform in CrossFire or SLI, as the connection is direct to the CPU rather than going via the chipset.
 
Associate
OP
Joined
30 Aug 2007
Posts
261
Doh :( Thanks for the info, thought I'd be able to get around it by using i7's :(

[edit - deleted my 2nd question: it's answered in the post above]



Thanks.
 
Last edited:
Soldato
Joined
11 Sep 2003
Posts
14,703
Location
London
Surely your question answers itself: you have already said that the CPU contains the PCI-E controller, so if the PCI-E lanes go direct to the CPU, why do you think it would be a P55 chipset issue?

What does the P55 have to do with the PCI-E x16 slots on the motherboard? NOTHING!
Any particular reason your reply reeks of bad attitude? :confused:

You clearly don't understand the P55 micro-architecture if that's the best answer you can come up with. How do you think the data travels from the CPU to the graphics card and system memory?

Do you know anything about how the DMI link works? If so, could it provide enough bandwidth to supply the PCI-E 2.0 controller at x16/x16?

Also, how would they differentiate between 1156 boards and 1366 boards if both were made x16/x16? It could actually lead to the mainstream outperforming the extreme platform in CrossFire or SLI, as the connection is direct to the CPU rather than going via the chipset.
Would you please read people's posts before jumping in with irrelevant answers.

I wasn't asking your opinion on Intel's marketing strategy, I asked: *in theory* is this possible?

Don't bother replying unless you're prepared to make an effort to construct a friendly post.
 
Associate
Joined
17 May 2004
Posts
1,585
It certainly wasn't my intention to offend or upset anyone, so I sincerely apologize if any was caused. I am just a very lazy typist.

Hopefully this response is friendlier and a more informative answer as to my understanding and reasoning; it is, however, a lot more typing than my initial response.

I read your question as: could the P55 chipset theoretically support an 1156-socket processor that had 2 x PCI-E x16 slots?

This is my reasoning.

As the PCI-E slots go directly to the PCI-E controller in the CPU in all the diagrams of the P55 platform that I have seen, the interface from the CPU to the P55 chipset would be unaffected, so I see no reason for it not to be supported.

All that would need to change is the PCI-E controller on the CPU, to have more lanes connected, plus more PCI-E lanes routed on the motherboard.

We are not talking about changing the DMI link here, just the PCI-E controller, which you said in your question is on-chip; daegan pointed out in his answer as well that the PCI-E controller is on-die in the CPU.

http://techreport.com/articles.x/17513

That is a review I read of the P55 platform; it shows the memory controller and PCI-E 2.0 controller in the CPU, with the PCI-E 2.0 slots and RAM slots going to the CPU and not to the P55 chipset.

The diagram of the chipset layout is the same one as on the Intel site, so I have no reason to doubt the diagram in the article.

From the diagram, unless I am missing something, data transfer from the CPU to system memory and GPU doesn't seem to route via the P55 chipset at all: the PCI-E lanes go direct to a PCI-E controller in the CPU, and the memory controller is located in the CPU as well.

The article I linked to does raise the question of whether the 2GB/s DMI link would be enough; however, as the X58-to-ICH10 link runs at the same DMI speed, if that is enough for a 2 x PCI-E x16 slot system then I see no reason why the 2GB/s DMI link on the P55 would not be able to support such a PCI-E configuration.

All the X58 IOH does is provide the PCI-E links for the x16 slots, with a QPI link to the CPU at 25.6GB/s; from the X58 to the ICH10, however, is a 2GB/s DMI link, the same as from the 1156 CPU to the P55 chipset.

Whilst the ICH10 has the following:

12 USB ports vs. 14 for the P55.
6 PCI-E x1 lanes vs. 8 for the P55.

Audio is the same, SATA ports are the same, and both have Gigabit LAN, albeit with different connections into the chip.

If the ICH10 can carry enough data from storage, network and audio over a 2GB/s DMI link, then I cannot see why the P55 could not carry enough over the same DMI link to support a system with 2 x PCI-E 2.0 x16 slots.

The DMI link does not have to carry bandwidth for CPU-to-GPU or CPU-to-memory traffic, from what I can see on the diagrams.
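To put rough numbers on that argument (using the figures quoted in this thread; the variable names are mine):

```python
# Figures as quoted in this thread: DMI ~2 GB/s, and a PCIe 2.0 x16
# slot ~8 GB/s in one direction.
DMI_GB_S = 2.0         # CPU <-> P55 (and X58 <-> ICH10) DMI link
PCIE2_X16_GB_S = 8.0   # one PCIe 2.0 x16 slot, one direction

# Two full x16 slots could in theory demand far more than DMI carries.
graphics_demand = 2 * PCIE2_X16_GB_S
print(f"Two x16 slots could demand {graphics_demand:.0f} GB/s, "
      f"vs. {DMI_GB_S:.0f} GB/s on the DMI link.")
# But the graphics lanes terminate at the CPU's on-die controller, so
# DMI only ever carries storage/USB/network traffic, not GPU traffic.
```

Which is exactly why the argument hinges on the PCI-E lanes going direct to the CPU: the DMI link never has to carry graphics traffic at all.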

However, if I am mistaken in my understanding of the memory and GPU data transfer paths to the CPU, then I am more than happy to be corrected; I just cannot see why Intel would route traffic down to the P55 chipset and back again from the on-die PCI-E controller, or from the on-die memory controller to the other parts of the CPU.

I merely added the second part as to why I can't see Intel releasing such a chip. It was more for the original thread starter, who initially believed that the i7 on 1156 might support 2 x PCI-E x16 slots at full x16, pointing out why such a chip would be unlikely, so I did see it as a valid point in the overall thread.

Again my apology to anyone that I unintentionally may have offended or upset with my first response.
 
Last edited:
Soldato
Joined
11 Sep 2003
Posts
14,703
Location
London
if I am mistaken in my understanding of the memory and GPU data transfer paths to the CPU, then I am more than happy to be corrected; I just cannot see why Intel would route traffic down to the P55 chipset and back again from the on-die PCI-E controller, or from the on-die memory controller to the other parts of the CPU
Thanks very much for taking the time to make a helpful and constructive post; it certainly pulled the thread back on track. I think you're quite correct in the points you make: it does indeed look like the P55 DMI link plays no part in the data transfer from the graphics card to the on-chip PCI-E controller. My mistake, but now I am wiser thanks to your refresher above! :)

I did read some technical documents that mentioned there are further DMI links inside the processor connecting the CPU to the PCI-E controller (and perhaps the memory too?); those on-chip internal DMI links are the ones I was confusing with the P55 PCH >> LGA1156 chip DMI link . . .

It's almost impossible for any single individual to master every detail of every platform but I would like to think that between all of us on OcUK forums we got most angles covered! :cool:

I merely added the second part as to why I can't see Intel releasing such a chip
That's fair enough! . . . I just wanted to know if in theory it was technically possible, because that's how my brain works. Of course Intel don't want to detract from their high-end X58 chipset by allowing humble mainstream users, who are paying £100-£180 for a newer P55 motherboard, to enjoy the slight benefits of full PCI-E 2.0 x16/x16 Crossfire. But I have a sneaky feeling that if the future LGA1366 chips are all hugely expensive, then perhaps, just perhaps, Intel may decide to *gift* their newer LGA1156 chips with a better on-chip PCI-E controller . .

The nub of my question comes down to this: did Intel cripple the Lynnfield chip with a substandard on-chip PCI-E controller, or is it that way because it would have been technically very hard to implement? . . . I don't know, because I am not a silicon engineer, hence my question! :D

Now what I want to know is: what is the name for the pathway that carries the data back and forth from the CPU to the graphics card, and likewise for the memory data? It doesn't fly through the air, so what is the correct name or term for these physical connections? . . . And once I've found out the name, do these physical connections have certain restrictions that would make full PCI-E x16/x16 Crossfire impossible? :confused:
 
Associate
Joined
17 May 2004
Posts
1,585
Only Intel will really know the answer to that; however, I would hazard an educated guess at the following and say that it is purely marketing and product differentiation, rather than anything technical.

This is why.

Intel have always distinguished between the mainstream P and the enthusiast X chipsets in this way. It was this way with the P45 and X48 PCI-E controllers, and the P35 and X38; even the 965 and 975 distinguished themselves in this way. This has always been a marketing rather than a technical limitation.

My understanding also is that it allows a 6-layer PCB to be used instead of the 8-layer usually found on X58 motherboards; the more layers in a PCB, the more expensive the board is to build.

The reason for using more layers is that it allows the engineers to run the different tracks or lanes on the motherboard at different layers, reducing interference between the individual tracks/lanes and allowing them to be run closer together horizontally, as there is vertical separation. With a 6-layer PCB there may not be enough spacing available to place the extra PCI-E lanes on the board without getting interference from one circuit to another. Apparently some of the high-end P55 boards will feature an 8-layer PCB, although they will cost more, and some of the X58 boards have a 12-layer PCB to improve the separation between the circuits on the motherboard.

Reducing the number of layers allows the overall cost of the system to be reduced, as the motherboard is less complex. There is no technical reason why they could not just add more layers to the PCB, although this then pushes the pricing closer to the X58 boards. One of the reasons Compaqs were so expensive was that they ran a higher number of layers on their motherboards, so they were more expensive to manufacture.

The only technical blocker I can see would be the number of connection points available on the socket. I don't know if there are physically enough points in an 1156 socket to connect an extra 16 PCI-E lanes into the on-die PCI-E controller. Whilst there has been a jump from 775 to 1156 pins, the CPU now has the memory controller and PCI-E controller built in, so there may not be enough connections left to fit the extra lanes.

Yes, Intel could have added more silicon and connection points to allow the extra lanes technically; however, that would just push the product ever closer to the 1366-socket systems.

Regarding the name of the physical circuitry across the motherboard linking the memory slots to the memory controller, and the graphics cards to the PCI-E controller: I have always heard them referred to as tracks or lanes.
 
Soldato
Joined
25 Sep 2009
Posts
8,413
Location
Billericay, UK
Doh :( Thanks for the info, thought I'd be able to get around it by using i7's :(

[edit - deleted my 2nd question: it's answered in the post above]



Thanks.

You don't need to 'get around it': x8 provides ample bandwidth even when running cards like the HD5870 in Crossfire, see here.

According to a study at techpowerup, you need to go down to around x4 bandwidth before you start to see any kind of drop-off in performance, and even then at x4 the drop is only a few %. The other factor you have to consider is what resolution you're playing at; according to these benchies you need to be running at 2560x1600 to really tax the bandwidth.

The bottom line is: don't worry about 'only' having x8 bandwidth when running Crossfire, as you're not going to bottleneck anything. You could still be running an older PCI-E x16 ver 1.0 slot with a Radeon HD5870 installed and you would have no concerns.
 
Underboss
Joined
16 Jun 2009
Posts
7,670
Location
Cambridge
I suspect that there aren't enough spare pins in the LGA1156 socket to support another 16 PCIe 2.0 lanes, so additional lanes could not be added to the CPU. The DMI link would be utterly saturated by trying to add PCIe to the P55 chipset itself. So I really don't see P55 boards ever having more PCIe lanes, except by adding a PCIe switch, but that's not really adding more lanes, just a way of sharing them.

Intel didn't deliberately hobble the chipset; this is all in the name of driving down costs. Two 16-way PCIe 2.0 slots running at 8-way is enough for any current SLI or Crossfire setup using single-GPU cards, and the mainstream market is not about tri or quad SLI. The mainstream buyer doesn't want to buy the more expensive and less energy-efficient X58 platform when they won't be taking advantage of it.
 
Caporegime
Joined
26 Dec 2003
Posts
25,675
Does anyone know if this is a P55 chipset limitation, or just due to the current on-chip PCI-E 2.0 controller being x8/x8 in Crossfire?

i.e. in theory, could a new LGA1156 chip feature x16/x16 in Crossfire? If so, could the P55 support this? :confused:

As far as I can figure, it's a bandwidth limitation.

The PCI-E link on Lynnfield has 8GB/s of bandwidth, contrasted with Bloomfield, which has a QPI link with about 3 times more bandwidth.

The "P55 chipset" is a bit of a misnomer: they have essentially removed the X58 northbridge (which controlled PCI-E), and the PCI-E functions are now integrated into the CPU. So you just have the Lynnfield CPU connected to an ICH10R southbridge, with the last of the northbridge functions (PCI-E) now inside the CPU.
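A quick sanity check on that "about 3 times" figure (numbers as quoted above; variable names mine):

```python
# Figures as quoted in this thread, one direction each.
LYNNFIELD_PCIE_GB_S = 8.0    # Lynnfield's on-die PCIe 2.0 x16 link
BLOOMFIELD_QPI_GB_S = 25.6   # Bloomfield's QPI link to the X58 IOH

ratio = BLOOMFIELD_QPI_GB_S / LYNNFIELD_PCIE_GB_S
print(f"QPI is {ratio:.1f}x the Lynnfield PCIe link")  # 3.2x
```

So "about 3 times" works out to 3.2x on the quoted figures.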
 
Last edited:
Associate
Joined
17 May 2004
Posts
1,585
This 8GB/s link, I take it, is the internal link from the PCI-E controller to the rest of the CPU, as the DMI link from the CPU to the P55 is only 2GB/s.

Does this internal link need access to external connections (would PCI-E to CPU need to go external)? If they added the external PCI-E lane connectors (assuming spares are available), then surely they could expand the internal link in the silicon as well. Obviously it would take up more real estate, but with a die shrink to 32nm there should be more physical space available. I guess it would depend on the interference between the connecting lanes in the silicon as to whether they could add the extra connections internally.

However, apart from the lack of PCI-E lanes on the motherboards, I see no technical reason why, if they did that as well as adding the extra connectors and internal link, it couldn't be supported with the P55 chipset, as the DMI link from the CPU to the P55 would not need to change.

It does, however, lead me to ask this question, possibly slightly off topic but related, it seems to me.

We are seeing cards like the EVGA FTW200 appear with the NF200 PCI-E switch. This connects to the CPU at x16 whilst providing either 2 x16 slots or 4 x8 slots. Clearly it doesn't provide any extra bandwidth up to the CPU; however, it does allow more bandwidth between the two GPU cards, as I don't believe traffic between the GPU cards would have to go to the PCI-E controller on the CPU, thus leaving the single x16 link up to the CPU free for data moving from the GPUs to the CPU and vice versa.

Would this provide us with sufficient bandwidth for a proper x16/x16, or is my understanding of the NF200 and the data transfer wrong?
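A toy sketch of that reasoning (my own simplification of how a PCI-E switch like the NF200 is usually described, not vendor data):

```python
# Toy model of a PCIe switch: each GPU gets a full x16 port into the
# switch, but all CPU-bound traffic still shares one x16 uplink.
# Only GPU-to-GPU (peer-to-peer) traffic gains from the extra lanes.
PCIE2_LANE_GB_S = 0.5   # PCIe 2.0, one direction, per lane

def link_gb_s(lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe 2.0 link in GB/s."""
    return lanes * PCIE2_LANE_GB_S

uplink = link_gb_s(16)        # switch -> CPU, shared by both GPUs
per_gpu_port = link_gb_s(16)  # GPU -> switch, per card
print(f"Each GPU sees an {per_gpu_port:.0f} GB/s port into the switch,")
print(f"but both share one {uplink:.0f} GB/s uplink to the CPU.")
# So switch-based x16/x16 helps peer-to-peer transfers between the
# cards, but total CPU-to-GPU bandwidth is unchanged.
```

On this model the switch would help card-to-card traffic but not the aggregate bandwidth to the CPU, which is what the question above is really asking about.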

Having seen the price of the EVGA FTW200, I would be inclined to get an X58 system for now anyway, but I am wondering whether, technically, this would be a good move with the NF200.
 
Caporegime
Joined
20 Jan 2005
Posts
43,267
Location
Co Durham
More to the point, the P55 platform gets PCI-E 3 first. How is that going to be implemented?

Does this mean you will get the physical connection to run a PCI-E 3 card but no extra bandwidth?

Pretty pointless bringing it out on the P55 platform first, in that case. :confused:

A good explanation here, btw, if it hasn't already been posted:

http://www.tomshardware.com/reviews/core-i5-lynnfield,2379.html
 
Last edited: