
Ivy Bridge to use 3D 22nm transistors

It does seem like Intel are on a roll with all these new processor releases.

They seem to be announcing something new, soon to be released, all the time, making sure they widen the performance gap between
themselves and AMD even further, in an attempt to make sure AMD do not steal the performance crown from them.
 
The Financial Times are calling this the biggest breakthrough in microprocessor design in over 50 years. They're not normally given to overstatement. Thoughts?
 
lol Intel likes to use an Antec 900 case !!

They also like to demonstrate cutting-edge CPUs by serving up webpages, playing hi-def video, and describing the graphics and physics of a game. Seriously, that man must have presented the most pointless video ever :o
 
They also like to demonstrate cutting-edge CPUs by serving up webpages, playing hi-def video, and describing the graphics and physics of a game. Seriously, that man must have presented the most pointless video ever :o

Lol yeah

He should have run Intel Burn Test!!

I think he is the kind of guy who may have extensive knowledge of CPU architecture but, in the real world, lacks any understanding of how to make proper use of a CPU or how to benchmark it.
 
AFAIK, rather than being 3D in the sense people have always talked about, i.e. stacking transistors on transistors for huge transistor density, this is merely (though not bad at all) a transistor turned on its side, which, thanks to the space saving, allows more gate area to be used rather than more transistors.

It's pretty good, but a 50% power saving dropping from 32nm to 22nm is, well, expected, though it's getting harder to get the same power savings you'd get from, say, 130nm to 90nm.

Real 3D (stacking, a silly increase in transistor density) is still to come, if it's possible.
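For what it's worth, the "expected" ~50% figure falls straight out of ideal, Dennard-style scaling, since die area goes with the square of the feature size. A quick back-of-the-envelope sketch (idealised textbook scaling only; real processes fall short of this):

```python
# Ideal (Dennard-style) scaling for a 32nm -> 22nm shrink.
# Real silicon doesn't hit these numbers; this is just the textbook upper bound.
old_node, new_node = 32.0, 22.0

linear_shrink = new_node / old_node   # scale factor in each dimension (~0.69x)
area_shrink = linear_shrink ** 2      # same design takes roughly half the area

print(f"Area scale factor: {area_shrink:.2f}")
# Under ideal scaling, dynamic power at the same clock tracks area,
# which is where the 'expected ~50% power saving' comes from.
print(f"Ideal power saving: {1 - area_shrink:.0%}")
```

That's why a roughly-half power drop per node shrink is the baseline expectation rather than a breakthrough on its own; the tri-gate part is about keeping leakage under control so the scaling still holds.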

There's pretty much a new fancy name for whatever new method they bring in every generation.

They also like to demonstrate cutting-edge CPUs by serving up webpages, playing hi-def video, and describing the graphics and physics of a game. Seriously, that man must have presented the most pointless video ever :o

It's the way CPU marketing is going: 99% of buyers don't care how fast it folds, or encodes, or pretty much anything apart from gaming, bringing up web pages and accelerating high-def content. That's what 99% of Dell/HP/Acer/Asus/Apple buyers actually do with their computers.

That's why AMD doesn't have to compete on the CPU (they still will) to "win": in the very things Intel themselves are talking their CPUs up for, the GPU is massively involved, and an AMD CPU + GPU will beat anything Intel's got in most of these areas. AMD are doing great no matter the process, as frankly the game is moving from CPU performance to overall performance in what 99% of people do day in, day out.

Intel just have a pretty easy time of it: because of the process lead they can just pump out smaller chips that, CPU-wise, beat AMD easily, and for years they have been able to charge more. I.e. they just make huge margins compared to AMD. That's the biggest advantage of a process lead: not flat-out performance, but being a process ahead of everyone else means you can make similar things to everyone else at half the size, with higher yields and way higher profits.

If Intel/AMD market share stabilised at 50% each, Intel would probably still make 4-5 times the profit AMD could.
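The margin argument is easy to sketch with some made-up numbers (the die sizes and wafer cost below are purely illustrative, not real Intel or AMD figures): halving the die area roughly doubles the dies per wafer, so the cost per die roughly halves while the selling price doesn't.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Crude estimate: usable wafer area / die area, ignoring edge loss
    and scribe lines. Good enough to show the trend."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

WAFER_COST = 5000.0  # hypothetical cost of one 300mm wafer, in dollars

# Hypothetical die sizes: same design on a trailing vs leading process.
for die_area in (200, 100):
    n = dies_per_wafer(300, die_area)
    print(f"{die_area}mm^2 die: ~{n} dies/wafer, ~${WAFER_COST / n:.2f} per die")
```

Same wafer, same design, half the silicon per chip: that's the profit gap in a nutshell, regardless of where market share settles.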
 
Is real transistor stacking going to prove too expensive to put into production, though? I mean, with this method of CPU construction and reducing it to 22nm, you have a massive reduction in construction success rate. Stacking would only make the problem worse. I also heard that Intel announced they are already working on a 14nm version of this.
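The yield worry can be illustrated with the classic Poisson defect model, where the fraction of good dies falls off exponentially with the critical area per die, so anything that grows that area (like stacking layers) hurts. The defect density below is a made-up number just to show the shape of the curve:

```python
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    """Classic Poisson yield model: probability a die has zero defects."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

D0 = 0.5  # hypothetical defect density in defects/cm^2, for illustration only

single_layer = poisson_yield(1.0, D0)  # a 1cm^2 die
stacked = poisson_yield(2.0, D0)       # same die with twice the critical area
print(f"single layer: {single_layer:.0%}, stacked: {stacked:.0%}")
```

Doubling the exposed area doesn't halve the yield, it squares it (exp(-2x) = exp(-x)^2), which is why stacking is such a hard sell until defect densities come way down.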
 
That's why AMD doesn't have to compete on the CPU (they still will) to "win": in the very things Intel themselves are talking their CPUs up for, the GPU is massively involved, and an AMD CPU + GPU will beat anything Intel's got in most of these areas. AMD are doing great no matter the process, as frankly the game is moving from CPU performance to overall performance in what 99% of people do day in, day out.


I agree.

Unless Intel start manufacturing their own GPUs on the same level as NVIDIA or AMD, they will get beaten overall performance-wise, especially in gaming and 3D rendering. By this I strictly mean using only AMD CPU + GPU or Intel CPU + onboard GPU products.

Although CPUs and GPUs aren't directly comparable, one thing both can do is floating-point calculations (working with real numbers as opposed to integers); in the processor's case this is handled by the floating-point unit (FPU).

If I am correct, computer games also make heavy use of floating-point calculations. GPUs excel in this area compared to CPUs, and if you pitted an AMD HD 6990 against Intel's latest HD 3000 integrated GPU, the latter would get obliterated; there is no competition.
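As a toy illustration of the kind of floating-point work games lean on, here's a dot product, the sort of per-vertex/per-pixel arithmetic a GPU runs across thousands of parallel threads while a CPU's FPUs chew through it a few operations at a time (pure illustration, not a benchmark):

```python
def dot(a, b):
    """Floating-point dot product: the bread and butter of 3D maths
    (lighting, transforms, physics). GPUs run millions of these in parallel."""
    return sum(x * y for x, y in zip(a, b))

# e.g. comparing a surface normal against a light direction in a lighting pass
print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # -> 32.0
```

A game runs this sort of thing for every vertex and pixel every frame, which is exactly where hundreds of GPU shader cores bury a handful of CPU FPUs.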

Between AMD and Intel;

CPU wise Intel is king

GPU wise AMD is king

Looking at overall performance, the heavy use of the GPU will give AMD the edge. So I agree with drunkenmaster that, taking CPU + GPU into account, AMD will beat Intel in most areas :cool:


Edit: Here is a comparison of different GPUs from different manufacturers. You can get an idea of how far Intel are behind in graphics cards.

http://www.tomshardware.co.uk/gamin...n-hd-6870-geforce-gtx-570,review-32094-7.html
 
Is real transistor stacking going to prove too expensive to put into production, though? I mean, with this method of CPU construction and reducing it to 22nm, you have a massive reduction in construction success rate.

They made the SRAM chip in 2009 and claim this only costs 2-3% more per wafer, so I'm guessing it's pretty solid by now.
 