No, that's not right: 775 was DDR2-only from day one, which was something like 2003. Not sure what made the price collapse, tbh; I expect it was some sort of improvement to the fabbing process.
To be honest, integrated GPUs won't be anything special for a long while to come. Only when we hit, say, at least the 16-core stage, or maybe 24/32 cores, will it make sense to drop in multiple GPU-style cores for speed. You might get 31 CPU cores + one GPU core for ultimate desktop encoding, rendering or server performance, but there could also be 16/16 versions with 16 CPU / 16 GPU cores for a fantastic gaming chip. With only 4 or 8 cores, though, you're giving up a significant amount of CPU power to fit in GPU cores.
I don't think we'll get a big boost in cache from the next die shrink either; cache is an exponential usage situation. Say the prediction unit predicts this particular thread might need one of 2 different bits of info, so it stores both; then each of those might need 2 after that, and each of those 2 after that, and you end up with 2-4-8-16-32-64 etc. bits of info ready to be streamed into the core. With relatively high-latency access to memory, but not realistically much less bandwidth, the key is to be constantly prefetching that info rather than constantly waiting. So the more steps ahead you store in cache, the exponentially more cache you need. But once you cut that latency, then beyond the 3rd or 4th step, where you'd need to stream in silly amounts of extra data, it's simply quicker to access memory through an onboard memory controller. So a large cache simply isn't needed in the slightest on a chip of that design. Really, an L3 big enough to store the next 3 steps of info is more than enough, and that L3 is going to be sized more by the number of cores: more cores means more data the chip can get through, so give it a proportional increase in L3 cache.
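To put rough numbers on that doubling, here's a minimal sketch of the idea, assuming a branching factor of 2 per prediction step and 64-byte cache lines (both illustrative figures, not real CPU parameters):

```python
# Rough illustration of how speculative prefetch depth blows up cache usage.
# Assumes each prediction step can go one of 2 ways (branching factor 2) and
# each "bit of info" occupies one 64-byte cache line -- illustrative only.

BRANCH_FACTOR = 2
LINE_BYTES = 64

def lines_needed(depth: int, branch: int = BRANCH_FACTOR) -> int:
    """Total cache lines needed to hold every candidate up to `depth` steps ahead."""
    # branch + branch^2 + ... + branch^depth
    return sum(branch ** step for step in range(1, depth + 1))

for depth in range(1, 7):
    lines = lines_needed(depth)
    print(f"{depth} step(s) ahead: {lines:>3} lines (~{lines * LINE_BYTES} bytes)")
```

That gives 2, 6, 14, 30, 62 and 126 lines in total for 1-6 steps ahead, which is the 2-4-8-16-32-64 doubling per step the post describes; past the 3rd or 4th step the totals climb fast, so a lower-latency onboard memory controller starts to beat simply piling on more cache.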
One thing I dislike about the Athlon 64 and its onboard memory controller IS the lack of motherboard upgrades. The 965 was decent, but overclocking on the P35 is noticeably more reliable when it comes to getting much higher. When the memory controller is on the CPU, tweaks to that controller come out VERY rarely, and mostly only with a big upgrade. For instance, if the Phenom's memory controller is what's holding it back right now (a 790FX board can do 300-350 HTT with an X2 easily, but 220 HTT is hard for a Phenom), then with the controller in the northbridge a simple new spin of a VERY simple chip could be a quick fix. Instead we had to wait a heck of a lot longer for the B3 silicon from AMD, and even then there's nothing known about its HTT overclocking ability. Through my fairly long use of socket 939 there was only really the NF4 available, which I didn't really like. Nvidia had no reason to upgrade it; there was really very little to upgrade, as 99% of the performance was down to the chips, not the northbridge revision.
A memory controller integrated into the northbridge works very well and gives more than enough bandwidth. A simple move by Intel from the traditional quad-pumped FSB to something more modern and faster, along with triple-channel memory, would have been far easier, cheaper and faster, would have allowed multiple versions of motherboards, and would have made DDR2 support easy to keep.
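For a rough sense of the bandwidth numbers involved, here's a minimal back-of-the-envelope sketch using the standard peak-bandwidth formula (channels × transfers per second × 8 bytes per 64-bit channel); the DDR2-800 and DDR3-1066 speed grades are just illustrative examples, not a claim about any particular platform:

```python
# Theoretical peak memory bandwidth: channels * transfers/s * 8 bytes per 64-bit channel.
# Speed grades below are illustrative examples, not tied to any specific board.

def peak_bandwidth_gb_s(channels: int, mega_transfers: int, bus_bytes: int = 8) -> float:
    """Peak theoretical bandwidth in GB/s for DDR-style memory."""
    return channels * mega_transfers * 1_000_000 * bus_bytes / 1e9

configs = [
    ("dual-channel DDR2-800",    2,  800),
    ("dual-channel DDR3-1066",   2, 1066),
    ("triple-channel DDR3-1066", 3, 1066),
]

for name, channels, rate in configs:
    print(f"{name}: ~{peak_bandwidth_gb_s(channels, rate):.1f} GB/s peak")
```

That works out to roughly 12.8, 17.1 and 25.6 GB/s peak respectively, which is the kind of headroom a faster bus plus triple channel could have added without moving the controller onto the CPU.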
I'm not a fan of the integrated memory controller, as I think it's hurt AMD quite badly in some cases, though it's helped in others. At the end of the day, if Intel or AMD make a crappy design with poor overclocking, a new motherboard chipset won't help with that, and a long wait for a better-clocking chip is in order. Also, the IMC produces a fair amount of heat, and you're moving that onto the CPU. Cooling an AMD chipset is incredibly easy, as there's just no heat there; Intel has difficulties there, so we'll see if it works out or not.
EDIT: just checked the likes of Dell, and on their highest-end gaming system there's no sign of DDR3 anywhere at all; even on their 2.3k gaming system with 2 gigs of memory, it's £90 to upgrade to 4 gigs of DDR2. Which, to be honest, is fantastic in Dell terms; not long ago it would have been £300 to have 4 gigs of DDR2 in one of their systems. It just shows how cheap DDR2 has gotten.
The thing that brought about cheaper DDR2, as in the drop from £200-250 DDR2 kits down to £100-150, was when 775 came out with, from what I recall, only DDR2 support. Even then, if I recall correctly, it was expensive until Dell and the like started to sell more 775 systems, and eventually DDR2 prices dropped. We won't hit the point where DDR3 is actually needed until a chip that only supports it comes out; that chip IS Nehalem, and DDR3 won't be cheap until after that happens. Probably 1 in every 10,000 systems sold uses DDR3 at the moment; memory makers just aren't making it, as no one really wants it. When production shifts from DDR/DDR2 to DDR3, which will only happen when demand goes through the roof, prices will drop, and fast.
I went from a P4 570 to an X2 4400 to an E6700 to a Q6600. Each one gave me a massive boost over the previous. I don't see anything doing that to my Q6600 for some time, tbh. Maybe Nehalem... maybe not!
Then explain why my mum's P4 (LGA775 - I know, I've tinkered) PC has 512MB of DDR...
You know, GPUs such as nVidia's G80 architecture are already lots of small processors on one die; it's already very modular, so why do we need to split them up any more? Aren't they modular enough? I don't think multi-GPU is the way forward; it's the lazy man's way of getting a cheap release with half-decent performance.
Quad GPUs are quite frankly useless unless you're doing rendering or modelling, and since Nvidia doesn't seem to care about its customers now, quad SLI is a no-go. At least ATI cares about most of its customers. I mean, really, who would win with a quad-GPU setup in a game, ATI or Nvidia?
Nvidia based on the current technology, ATI based on current drivers, as far as I know. It doesn't matter anyway; they're patchwork solutions designed to last until ATI can get a working next-gen card out and Nvidia can try to beat it.
Quad-core CPUs are not irrelevant by any means: many games use dual cores now, and they will be optimised for quad cores; SupCom, for example, does a good job of this. Of course your standard FPS won't see much of an improvement; this is mainly strategy games for now. No one has ever forced anyone to buy a quad-core CPU; if you don't see a use for it, don't get it. That doesn't make the technology any less exciting.
Well, I'll be upgrading from a P4 Prescott 3.20E on socket 478, which is what I'm using now and have been for the past year and a half, so I should definitely see a decent speed boost across the board with Nehalem.
Are you saying you got a massive boost going from an E6700 to a Q6600? How come? I was thinking of eBaying my E6600 and getting either a Q6600 or an E8400 Wolfdale. However, there are so many different opinions on whether it is worthwhile that I am now unsure.
I am running:
EVGA 780i (680i failed)
2 x BFG 8800GT OC2 (SLI)
I play IL-2: Forgotten Battles.