
OK, who's waiting for Nehalem?

I think I'm going to run a set of benchmarks (PCMark + 3DMark or something) on my system before I upgrade, then perform the exact same benchmarks AFTER the upgrade...

Should be quite funny to see the gains! (I'm thinking... about 10x the performance across the board lol)
 
No, that's not right - 775 was DDR2 only from day one, which was something like 2003. Not sure what made the price collapse tbh; I expect it was some sort of improvement to the fabbing process.

Then explain why my mum's P4 (LGA775 - I know, I've tinkered) PC has 512MB of DDR...
 
No, that's not right - 775 was DDR2 only from day one, which was something like 2003. Not sure what made the price collapse tbh; I expect it was some sort of improvement to the fabbing process.

Errm, I did say it was only DDR2 from what I recall, so I'm not sure exactly what you're disagreeing with. Either way, it wasn't fab processes that made it cheaper. There's no real fundamental difference between DDR1 and DDR2 as far as the silicon or fab process goes. It's simple: when you make 5 million DDR chips a year and 10,000 DDR2 chips, the DDR2 chips cost a lot more.

Say a fab can make 100,000 DDR chips a day, but to make something different the machinery needs to be reset with different settings and so on - it all needs to be shut down, changed over and started back up. They make more money by staying with the one process, so to make it worth their time to make DDR2, it has to sell at a price that essentially compensates for all that downtime, which is all lost earnings. So the DDR2 price was bumped up, and it was bumped up enormously because such a small number of DDR2 chips were being made. Say on a given day the choices are 100k DDR1 chips, or, with all the downtime of changing over, 20k DDR1 chips and 20k DDR2 chips. The price of the DDR2 chips HAS to go up enough to cover the missing 60k chips that couldn't be made. They obviously can't pass this cost onto the 20k DDR1 chips, as those already have an established price.
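The pricing logic above works out as a quick back-of-the-envelope calculation. A minimal sketch (all numbers are the illustrative ones from the post, not real fab figures):

```python
# Why low-volume DDR2 had to be priced high: the DDR2 chips made on a
# changeover day must cover the revenue from all the DDR1 chips the fab
# couldn't make while the line was down. Illustrative numbers only.

ddr1_per_day = 100_000   # chips/day if the fab runs DDR1 only
ddr1_price = 1.0         # established DDR1 price per chip (arbitrary unit)

# On a changeover day the fab only manages 20k DDR1 + 20k DDR2 chips.
split_ddr1 = 20_000
split_ddr2 = 20_000

# Revenue the fab would have made staying on DDR1 all day:
baseline_revenue = ddr1_per_day * ddr1_price

# DDR1 chips still sell at the established price, so the DDR2 price has
# to make up the rest (i.e. the "missing" 60k chips' worth of earnings).
ddr2_price = (baseline_revenue - split_ddr1 * ddr1_price) / split_ddr2

print(ddr2_price)  # 4.0 -> each DDR2 chip must sell at 4x the DDR1 price
```

Once DDR2 becomes the main product and the fab no longer loses a day changing over, that downtime premium disappears, which is the price collapse the thread is describing.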

Essentially the same situation is happening here. While 775 was predominantly DDR2 (there have always been barsteward-offspring chipsets that offer weird compatibilities), there was more plentiful, cheaper DDR1, and 478 mobos and chips offered identical performance at the time 775 came out. So DDR2 stayed expensive: like now, the option was there, but it was needlessly expensive and no one gave a crap.

Then C2D came out, and no longer was the 478 option there - it was slower and phased out. So upgrades weren't a choice any more: everything new was 775, and DDR2. That meant Dell, Acer, Asus - laptops, desktops, servers worldwide - were sold with DDR2, production went through the roof and costs came tumbling down. Remember, DDR2 prices weren't great until the last 6 months, and C2D has been out for coming up on two years. It was about July 06, I think, when the E6300/E6600 came out, which was some 2 years at least after DDR2 was available, and even then a decent set of 2GB DDR2 was £150-200.


Nehalem will be to DDR3 what C2D was to DDR2. It will START the production ramp-up and the costs coming down, but it won't be massively cheap for a long while after that. C2D might have been fast, but P4s outsold it 100-fold for the first 4-6 months, because that's how the industry works.

In short, DDR3 won't be cheap at all when Nehalem launches. It will be cheaper than now, but not by much; 3-4 months after launch we'll see a slight drop, but it will be a year or two before it becomes insanely cheap like DDR2 is now.

Also remember Nehalem is launching at the high end, with mid/low end months later. That means tiny sales for a while - the £700 chips. Like with GPUs, the high end makes up less than 1% of the market. Only when the lower-priced chips are out will DDR3 really go into mass production.
 
To be honest, integrated GPUs won't be anything special for a long while to come. Only when we hit, say, at least the 16-core, or maybe 24/32-core stage will it make sense to drop in multiple "GPU"-styled cores for speed. You might get 31 CPU cores + one GPU core for ultimate desktop encoding, rendering or server performance, but there could also be 16/16 versions with 16 CPU/16 GPU cores for a fantastic gaming chip. With only 4 or 8 cores, though, you're giving up a significant amount of power to drop in GPU cores.

I don't think we'll get a big boost in cache from the next die shrink either; cache is an exponential-usage situation. Say the prediction unit predicts this particular thread might need one of 2 different bits of info, so it stores both; then each of those might need 2 after that, and each of those 2 after that - you end up with 2, 4, 8, 16, 32, 64 etc. bits of info ready to be streamed into the core. With relatively high-latency access to memory (but not realistically much less bandwidth), the key is to be constantly pre-fetching that info rather than constantly waiting. So the more steps ahead you store in cache, the exponentially more cache you need. But when an onboard memory controller cuts that latency, then beyond the 3rd or 4th step - where you'd need to stream in silly amounts of extra data - it's simply quicker to access the memory directly. So a large cache is simply not needed in the slightest on a chip of that design. Really, an L3 big enough to store the next 3 steps of info is more than enough, and that L3 design is going to be based more on the number of cores: more cores, more data it can get through, so give it a proportional increase in L3 cache.
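The exponential blow-up described above can be sketched in a few lines, assuming each predicted step branches two ways (a simplified model, not how any real prefetcher is specified):

```python
# Cache entries a 2-way branching prefetch needs per lookahead depth.
# Each predicted step can need one of 2 pieces of data, so covering every
# possible path for n steps means holding 2 + 4 + ... + 2^n entries.

def entries_needed(depth, ways=2):
    """Total speculative entries to keep resident for `depth` lookahead steps."""
    return sum(ways ** step for step in range(1, depth + 1))

for depth in range(1, 7):
    print(depth, entries_needed(depth))
# Depth 3 needs 14 entries, depth 6 already needs 126: past the 3rd or 4th
# step, fetching on demand via a low-latency on-die memory controller beats
# burning exponentially more cache on speculation.
```

This is the argument for why an integrated memory controller reduces the payoff of very large caches: the cost of covering one more lookahead step doubles, while the cost of a cache miss has been cut.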

One thing I dislike about the Athlon 64 and onboard memory IS the lack of motherboard upgrades. 965 was decent, but overclocking on the P35 is noticeably a lot more reliable in terms of getting much higher. When you have an onboard memory controller, tweaks to that controller come out VERY rarely, and mostly only with a big upgrade. For instance, say the Phenom's memory controller is what's holding it back right now - a 790FX board can do 300-350 HTT with an X2 easily, but 220 HTT is hard for a Phenom. A simple new spin of a VERY simple northbridge could be a quick fix, but instead we had to wait a heck of a lot longer for the B3 silicon from AMD, and even then there's nothing known about its HTT overclocking ability. Through my fairly long use of 939 there was only really the NF4 available, which I didn't really like. Nvidia had no reason to upgrade it; there was really very, very little to upgrade, as 99% of the performance was down to the chips, not the northbridge version.

An integrated memory controller works very well on the northbridge and gives more than enough bandwidth. A simple move from Intel's traditional quad-pumped FSB to something more updated and faster, along with triple-channel memory, would have been far easier, cheaper and faster, would have allowed multiple versions of motherboards, and DDR2 support would have been easy to put in.

I'm not a fan of the on-die memory controller, as I think it's hurt AMD quite badly in some cases, though helped in others. At the end of the day, if Intel or AMD make a crappy design with poor overclocking, a new mobo chipset won't help with that, and a long wait for a better-clocking chip is in order. Also, the IMC gives off a fair amount of heat, and you are moving it onto the CPU. Cooling an AMD chipset is incredibly easy as there's just no heat there; Intel may have difficulties here, and we'll see if it works out or not.

EDIT: just checked the likes of Dell, and on their highest-end gaming system there's no sign of DDR3 anywhere at all. Even on their 2.3k gaming system with 2 gigs of memory, it's £90 to upgrade to 4 gigs of DDR2. Which, to be honest, is fantastic in Dell's terms - not long ago it would have been £300 to have 4 gigs of DDR2 in one of their systems. It just shows how cheap DDR2 has gotten.

The thing that brought about cheaper DDR2 - the drop from £200-250 kits down to £100-150 - was when 775 came out with, from what I recall, only DDR2 support. Even then, if I recall correctly, it was expensive until Dell and the like started to sell more 775 systems and eventually DDR2 prices dropped. We won't hit the point where DDR3 is actually needed until a chip that only supports it comes out - that IS Nehalem - and DDR3 won't be cheap till after that happens. Maybe 1 in every 10,000 systems sold uses DDR3 at the moment; memory makers just aren't making it, as no one really wants it. When production shifts over from DDR/DDR2, which will only happen when demand goes through the roof, prices will drop, and fast.

Your argument regarding DDR3 is valid and sound in the usual financial sense, but look at the prices: 2 gigs of DDR3 is sub-£100, and it has a lot of time to drop even more before Nehalem gets here. Why ignore what the memory manufacturers themselves are saying - that prices will even out this year?
 
Hm, actually, think of it more like this: January last year, DDR2 was about £100 for a set of 2GB PC2-6400; by November it was around the £30 mark. You can pick up 2GB of DDR3 for about £100 now; it wouldn't surprise me if by November it was around £40-50 at least, and by March next year it'll probably be a little cheaper than DDR2.

That, and LGA775 definitely is not DDR2-exclusive. You can spin that as much as you like, but I actually have an LGA775 Pentium 4 Prescott (2.9GHz, 515) system with 512MB of DDR running in it. The only platform that is DDR2-exclusive is AM2, and that's because of the Athlon 64's memory controller.
 
Yeah, DDR3 prices are getting better - 99 quid for 2 gigs is more than enough now.

Reckon we just need Octocore and be done with it :)

By then we're gonna see quad GPUs, and frankly we already are.
 
You know - GPUs such as nVidia's G80 architecture are already lots of small processors on one die - it's already very modular, so why do we need to split them up any more? Aren't they modular enough? I don't think multi-GPU is the way forward; it's the lazy man's way of getting a cheap release with half-decent performance.
 
I went from a P4 570 to an X2 4400 to an E6700 to a Q6600. Each one gave me a massive boost over the previous. I don't see anything doing that to my Q6600 for some time tbh - maybe Nehalem... maybe not!

Are you saying you got a massive boost from an E6700 to a Q6600? How come? I was thinking of eBaying my E6600 and getting either a Q6600 or an E8400 Wolfdale. However, there are so many different opinions on whether it is worthwhile that I am now unsure.

I am running:

EVGA 780i (680i failed)
2 x BFG 8800GT OC2 (SLI)
Play IL2 Forgotten battles.
:confused:
 
You know - GPUs such as nVidia's G80 architecture are already lots of small processors on one die - it's already very modular, so why do we need to split them up any more? Aren't they modular enough? I don't think multi-GPU is the way forward; it's the lazy man's way of getting a cheap release with half-decent performance.

Very true. GPUs need more shaders and quicker memory - that's the only way forward; power-hungry double-card solutions won't make anyone happy.
 
Quad GPUs are quite frankly useless unless you're doing rendering or modelling, and since Nvidia doesn't seem to care for their customers now, quad SLI is a no-go. At least ATI care about most of their customers. I mean, really, who would win with quad GPU in a game, ATI or Nvidia?
 
Quad GPUs are quite frankly useless unless you're doing rendering or modelling, and since Nvidia doesn't seem to care for their customers now, quad SLI is a no-go. At least ATI care about most of their customers. I mean, really, who would win with quad GPU in a game, ATI or Nvidia?

Nvidia based on the current technology, ATI based on current drivers, as far as I know. Doesn't matter anyway; they're patchwork solutions designed to last until ATI can get a working next-gen card and Nvidia can try to beat that.

Quad-core CPUs are not irrelevant by any means. Many games use dual cores now, and they will be optimised for quad cores - SupCom, for example, does a good job at this. Of course your standard FPS won't see much of an improvement; this is mainly strategy games for now. No one has ever forced anyone to buy a quad CPU; if you don't see a use for it, don't get it. That doesn't make the technology any less exciting.
 
Nvidia based on the current technology, ATI based on current drivers, as far as I know. Doesn't matter anyway; they're patchwork solutions designed to last until ATI can get a working next-gen card and Nvidia can try to beat that.

Quad-core CPUs are not irrelevant by any means. Many games use dual cores now, and they will be optimised for quad cores - SupCom, for example, does a good job at this. Of course your standard FPS won't see much of an improvement; this is mainly strategy games for now. No one has ever forced anyone to buy a quad CPU; if you don't see a use for it, don't get it. That doesn't make the technology any less exciting.

Agreed very much. If you know that what you're gonna be using the processor for isn't really going to be helped by a quad, what's the point of buying a quad? Good question!

Well, technology, much less the world, does not revolve around you or your personal needs. Some of us might want to render 3D animations, might feel that a shorter decompression time in WinRAR will hit the spot, or might encode audio or video on a regular basis. Well, I don't - I just use my computer for browsing the internet, college work, light gaming, the occasional bit of hobbyist programming and various other misc tasks - but guess what? I bought a low-end dual core because that's really all I need. I'll probably get a nice Nehalem-based PC once it's out and at the right price, because I feel I'd appreciate the speed increase.
 
Well, I'll be upgrading from a P4 Prescott 3.20E on Socket 478, which is what I'm using now and have been for the past year and a half, so I should definitely see a decent speed boost across the board with Nehalem.
 
Well, I'll be upgrading from a P4 Prescott 3.20E on Socket 478, which is what I'm using now and have been for the past year and a half, so I should definitely see a decent speed boost across the board with Nehalem.

You should indeed - I saw a huge speed boost going from a Pentium III 1.13GHz Tualatin to a 1.6GHz Pentium Dual-Core E2140. :p
 
Are you saying you got a massive boost from an E6700 to a Q6600? How come? I was thinking of eBaying my E6600 and getting either a Q6600 or an E8400 Wolfdale. However, there are so many different opinions on whether it is worthwhile that I am now unsure.

I am running:

EVGA 780i (680i failed)
2 x BFG 8800GT OC2 (SLI)
Play IL2 Forgotten battles.
:confused:

OK, maybe not massive, but with Vista 64 and 4GB of DDR2 everything does feel a lot more instant. I don't really seem to be able to do anything that makes it anything other than totally responsive.

Plus, given the price, you might as well have the spare cores, especially when it's so clockable. I'm not convinced I'd see any boost from getting a faster quad; maybe Nehalem is the time, with a new architecture and all the fun that will bring.

Personally, having 4 cores that are each capable of hitting 1MB SuperPi scores of about 14 seconds while the other 3 are doing other stuff, when 18 months ago I had an X2 that needed phase cooling to get it under 30 seconds, tells me I've had a big boost (and yes, I know SuperPi isn't the be-all and end-all, but it's a good quick-and-dirty test of raw speed).

Whittling that down a bit with CPU changes etc. isn't a big deal for me, so Nehalem is the next big chance I have for a big boost. Unless AMD manage something ridiculous and make something that can actually compete.
 
I have promised myself I am not upgrading my system until Nehalem. I often forget that my PC is already fast as *&^% off a shovel anyway, and I really don't need to keep spending money! Going to wait for the Nvidia 9900 and ATI 4800 as well... trying to make sure that when I do upgrade, it's a decent performance increase.
 