
Excuse my simple logic...

Associate
Joined
13 Sep 2009
Posts
1,612
Location
Maidstone, Kent
This may sound a bit troll-sciencey to you, but to me it seems to make sense. After building mainly AMD systems, I built my first Intel system at the weekend with an i5 Sandy Bridge CPU. What struck me was the physical side of the chip: it was really small in comparison to my Phenom II.

So, if the power of a CPU is determined by the number of transistors on the die, which (other than reducing power consumption) is why smaller manufacturing processes are better, why not just make a bigger chip so that you can fit a higher number of transistors? Seems common sense to me, but I'm sure there is a good reason why this doesn't work!

Problem, science? :P
 
I've asked this question before somewhere and it was explained to me. I can't remember now though.

I'm not much use am I :(
 
You can make a chip more parallel by throwing transistors at it (i.e. you do more operations at once, with each operation taking as long as it did before).

You can't make a chip more efficient at performing single operations by throwing transistors at it. Essentially, you have to engineer new architectures to make things more efficient. I imagine it's a bit like finding new software algorithms to make your code faster.
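That tradeoff is usually stated as Amdahl's law: extra parallel hardware only speeds up the fraction of the work that can actually run in parallel, while the serial part stays exactly as slow as before. A quick sketch (the 90% parallel fraction is an invented illustration, not a measured workload):

```python
# Amdahl's law: overall speedup from n parallel units when only a
# fraction p of the work parallelises; the serial (1 - p) part is the
# limit that no amount of extra transistors can remove.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A 90%-parallel task tops out at 10x no matter how many units you add:
for n in (2, 4, 16, 1_000_000):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

Even at a million units the 10% serial remainder caps the speedup just below 10x, which is why new architectures (rather than just more transistors) are needed for single-operation speed.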
 
FoxEye speaks wisdom; more transistors don't really translate into a linear increase in processing speed :)

Though there is the option of ever more specialised areas of the CPU for specific tasks - for example the video transcoding and encryption blocks that have gone into recent chips. The downside is that you're trying to predict what people will want for the next N years, and if the technique changes slightly your fancy hardware is wasted, or worse, a handicap.

And while I'm sure Intel could build a 64-core processor if they cared to, I imagine it'd suck about 2 kilowatts, and have a heat density that would melt conventional heatsink alloys :D
 
Hi there,

One thing that is probably worth mentioning is that the physical CPU package (squarish thing with a metal top and a bunch of pins/pads on the bottom) is much larger than the actual CPU (a postage stamp size of silicon packed with transistors held safely within the CPU package). The CPU package size is mainly determined by the number of interface pins and not the size of the true CPU within it.

As for increasing the size of the CPU die itself: it is true that the 32nm Sandy Bridge quad-core dies are smaller than the 45nm Phenom II Deneb dies, as you would imagine (216mm^2 vs 256mm^2). As others have said, a bigger die would increase parallel processing performance by packing on more transistors (more cores), but it wouldn't have much effect on single-threaded performance on the same architecture (unless you increase the clock speed, which reduces energy efficiency and greatly increases heat output). And most architectures are not fully modular, so you can't just bolt on another set of cores without a redesign.
 
To give Gundog48 a positive note - linearly increasing massive parallelism does work in the graphics industry.

In its simplest terms: GPUs do not have to deal with many conditional statements (and hence branch prediction); drawing a line from A to B can be split into n independent mathematical problems and assigned to n sets of transistors.

You can see this history of bigger and faster in the growth of Nvidia's chips, from the original GeForce (NV10) to the current GF110 (GTX 580) series.
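The line-drawing example can be made concrete: each sample point along the line depends only on the endpoints and its own index, so every point could go to its own ALU with no branching and no shared state. A minimal sketch (plain Python standing in for the parallel hardware):

```python
# Each of the n points along the line is an independent interpolation --
# no branches, no data shared between points -- which is exactly the
# shape of problem that maps well onto thousands of simple GPU ALUs.
def line_points(a, b, n):
    """n evenly spaced 2D points from a to b, each computable in isolation."""
    (ax, ay), (bx, by) = a, b
    step = 1.0 / (n - 1)
    return [(ax + (bx - ax) * i * step, ay + (by - ay) * i * step)
            for i in range(n)]

pts = line_points((0.0, 0.0), (10.0, 5.0), 5)   # endpoints plus 3 midpoints
```

On real hardware the loop body would run once per shader core; there is no serial dependency between iterations, so throughput scales with the number of units.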
 
There is also the physical side of things.

If you double the size of a chip you will only be able to produce half as many on each silicon wafer.

These chips will also have twice as many defects due to the doubled surface area.
(Not all defects cause a dud chip as redundancy is built into the design)

From a single silicon wafer you first of all get half the number of chip candidates; add to this a significantly increased number of defective chips, and the outcome is that large processors are much more expensive to manufacture.
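As a rough illustration of the "half as many candidates" point, here is a common first-order dies-per-wafer estimate (wafer area over die area, minus an edge-loss term for partial dies at the rim). The 300mm wafer and the die areas are illustrative assumptions, not any vendor's real figures:

```python
import math

# First-order die-candidate estimate: usable dies ~ wafer area / die
# area, less a correction for incomplete dies along the circular edge.
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

small = dies_per_wafer(300, 216)   # roughly Sandy-Bridge-sized die
big   = dies_per_wafer(300, 432)   # doubled die area
```

Note the bigger die actually gets slightly *fewer* than half the candidates, because edge losses hit large dies proportionally harder.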
 
So essentially:
  • It would increase the number of operations that could be carried out per clock cycle
  • The clock speed of the CPU would be unchanged, or even lower
  • It would not run anything faster, but could theoretically do more at once
  • It throws up loads of manufacturing issues, such as more room for error and harder architecture design

I think I've got the gist of it! Quite interesting stuff for someone who has very limited knowledge of CPU architecture!
 
There is also the physical side of things.

If you double the size of a chip you will only be able to produce half as many on each silicon wafer.

These chips will also have twice as many defects due to the doubled surface area.
(Not all defects cause a dud chip as redundancy is built into the design)

From a single silicon wafer you first of all get half the number of chip candidates; add to this a significantly increased number of defective chips, and the outcome is that large processors are much more expensive to manufacture.

This is the main reason, although not exactly as put.

If you double the die size, yields go down on an exponential curve, not linearly.

Take a wafer, say it costs $7000 to make, and let's say a smaller die gets 200 potential cores off a wafer at an 80% yield, so you're getting 160 working dies off a wafer, or about $44 a chip.

If you make that die twice as big, you instantly drop the potential cores to 100, but yields will also decrease significantly. In the first case 40 cores don't work; in the second case you'll still get 40 cores not working, but that itself brings the yield down to 60%. So you're now at 60 working cores but the same wafer cost, or about $117 a chip ;)
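The arithmetic above can be checked with a quick sketch; the $7000 wafer cost, the 200/100 candidate counts and the fixed 40 fatal defects per wafer are all the post's illustrative numbers, not real fab data:

```python
# Cost per working die under the post's simplifying assumption of a
# fixed number of die-killing defects per wafer, so bigger dies lose a
# larger *fraction* of candidates even though the defect count is equal.
WAFER_COST = 7000.0     # illustrative wafer cost in dollars
FATAL_DEFECTS = 40      # assumed fixed count of fatal defects per wafer

def cost_per_good_die(candidates):
    good = candidates - FATAL_DEFECTS
    return WAFER_COST / good

small = cost_per_good_die(200)   # 160 good dies -> $43.75, ~$44
big   = cost_per_good_die(100)   #  60 good dies -> ~$116.67, ~$117
```

Doubling the die area here multiplies the per-die cost by roughly 2.7x, which is the "double the size, nearly triple the cost" point.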

Double the die size, nearly triple the cost. That's not good: it limits your market, and that's at cost, with no profit; they need to make a profit to cover R&D on the chips and R&D on the process.

Another thing: a lot of the size difference is just a bigger heatspreader, not necessarily a bigger die underneath.

What else? A Sandy Bridge uses 95W under load, and it's already looking like Sandy Bridge-E, which isn't all that much bigger, will have a 130W TDP, and has been said to unofficially draw quite a bit more on top of that.

Different parts of the core use different amounts of power.

If you went with an 8-core Sandy Bridge with only a dual-channel memory controller, would the cores become severely bandwidth-limited? It's quite likely.


Back to manufacturing, there's one other quite simple problem: if you go from producing 160 cores per wafer to 60 cores per wafer, you've cut the number of CPUs you can produce a year by around 60%. If the world "needs" 100 million CPUs from Intel every year to make the world go around, well, Intel can't just produce 40 million.

In terms of single-threaded performance and clock speed vs die size, you really can throw transistors at both problems, but because of all the problems above, getting the most you can out of the smallest die is the best overall design approach in the silicon industry.

If AMD chips were 30% slower, but also 30% smaller, and they could make 30% more chips because of that size, then selling them 30% cheaper than Intel's would make them a killing; selling bigger chips that you can make fewer of, for less money than Intel, isn't the best method.
 
If you make that die twice as big, you instantly drop the potential cores to 100, but yields will also decrease significantly. In the first case 40 cores don't work, in the second case, you'll still get 40 cores not working, but that itself brings the yield down to 60%.

I'm not convinced this is quite right. Why would there still be 40 failed cores? For example, if two neighbouring original-sized cores had defects, that would only be one failed die at the new size. My brain hurts too much to work out the maths, but the result must be somewhere between 60% and 80%.
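One standard way to formalise that intuition is the Poisson defect-density model, where yield = exp(-D·A) for defect density D and die area A; scaling the area by some factor then raises the yield to that power, so doubling the area squares it. A sketch using the 80% baseline from the worked example (illustrative figures, not real fab data):

```python
# Poisson defect model: yield = exp(-D * A), so scaling die area by a
# factor r turns a base yield Y into Y**r. Doubling the area therefore
# squares the yield -- one reading of the "square of die size" rule.
def scaled_yield(base_yield, area_ratio):
    """Yield after scaling die area by area_ratio (Poisson model)."""
    return base_yield ** area_ratio

doubled = scaled_yield(0.80, 2.0)   # 0.8**2 = 0.64, i.e. 64%
```

Under this model the doubled die lands at 64%, inside the 60-80% window reasoned out above; real yields depend on clustering of defects and built-in redundancy, so this is only a first-order estimate.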

Edit:

It's far worse than I thought....

"A really rough measure of yield is that for similar products, the yield goes down by the square of the die size."
http://semiaccurate.com/2010/02/17/nvidias-fermigtx480-broken-and-unfixable/

This comes out as a 40% yield!
 
To give Gundog48 a positive note - linearly increasing massive parallelism does work in the graphics industry.

I thought I'd give your words some more meaning:

Nvidia 6800
[image: 6800gs_1.jpg]

Mars II Dual GTX 580
[image: asus-mars-ii-gpu.jpg]
 