
Will Future Games need faster CPU, or more "Cores"?

A little from column A and a little from column B, but assuming column B is more cores, then more of column B than A. The problem with silicon is not just heat: as they pack more transistors into the same space, they need to shrink the manufacturing process, which results in more wasted silicon. There is also currently a physical limit to how small a track of silicon can be before it simply breaks down when a current is passed through it (can't remember the exact figure, but it isn't far from the 45nm we are currently working at). There are people looking at how to alter silicon to become more resistant, and of course at what new materials we could use, but at the moment I don't see anything so compelling that I would place all my chips on it.

In the long run, it's not just multiple CISC cores that the big boys are concentrating on; they are more likely to produce Cell-style processors, even Intel.

Even now, their "lab baby" is an 80-core Cell-style processor. In a similar manner to a GPU, it has 80 individual simple processing "cores"; each has a pre-determined processing task it is designed for and isn't astonishingly fast in itself, but as a conglomerate the processing potential of the chip is very high.

Like processing on the GPU (CUDA, OpenCL etc.), they are also looking at how best to utilise this parallelism, using things like hardware thread schedulers. In theory this means the programmer becomes detached from how their work is being parallelised once more (people don't like programming in parallel, it's a bit of a PITA :))
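To give a rough flavour of that detachment (my own sketch in Python, nothing to do with Intel's actual tooling): a pool executor hides how tasks get scheduled onto hardware threads, so the caller just submits work and collects results.

```python
from concurrent.futures import ThreadPoolExecutor

def heavy(n: int) -> int:
    # Stand-in for an expensive per-item computation.
    return sum(i * i for i in range(n))

def parallel_total(jobs):
    # The pool decides how tasks map onto worker threads; the
    # caller never sees that detail. (For CPU-bound Python work
    # you would swap in ProcessPoolExecutor to use real cores.)
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(heavy, jobs))
```

Same idea the hardware schedulers aim for, just done in software: you describe the work, something else decides how it runs in parallel.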

The nice thing, of course, is that each of these "cores" can pump through data at a slower rate than, say, a 3GHz Core 2 does at the moment, which means they can be made smaller, which means more can be packed into the same die space.

I am almost 100% convinced massive parallelism is the way forward, and it's what we will see the big players starting to really sell soon. No doubt they will continue to push the boundaries of the more traditional multi-core CISC processors though; I reckon they will progress to at least 8 CISC cores before they turn down their next path.

Watch out for GPU processing as well. NVIDIA is really pushing the concept of using their chips for general processing at the moment, and big players like Adobe have begun to pick up on it; expect to see new versions of Photoshop with either CUDA or OpenCL support soon, I reckon. There is definitely something compelling about having a dedicated, highly parallelised processing engine within a host system that is there purely to accept high-load computation from the host machine. Especially given the cost of the mass-produced cards...
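For a flavour of the kind of work those engines eat up, here is a hedged Python sketch of SAXPY (the canonical introductory CUDA kernel): the same tiny operation is applied independently to every element, which is exactly the shape of computation that hundreds of stream processors can chew through at once.

```python
def saxpy(a, xs, ys):
    # GPU-style data parallelism in miniature: the same "kernel"
    # (a*x + y) runs independently on each element pair. On a GPU,
    # each stream processor would handle its own slice of the data,
    # with no dependencies between elements.
    return [a * x + y for x, y in zip(xs, ys)]
```

No element depends on any other, which is why this maps so cleanly onto massively parallel hardware.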
 
The main problem atm is that chips are approaching the physical limits of clock speed due to heat etc. The main push in computational physics atm is multithreading and providing programs that fully utilise more cores. Parallel computation is the future; just look at the first quantum computers. Everything will be multithreaded naturally in the future, allowing use of who knows how many cores.
 
Can I say both? More cores in the short term would be my guess, or better use of the number of cores currently on offer.

With new materials, however, surely clock speeds will be increased. It's insane to think that clock speeds have come as far as they can; on a silicon chip and at a decent power draw, though, maybe.
 
More cores. Look at GPUs with stream processors, which act like independent GPUs; the high-end HD 4000 series has 800 of them. So I say more cores. Speeds will go slightly higher, but as said, it's all down to heat issues.
 
A lot of games aren't really that CPU-demanding... and with hardware physics running on the GPU rather than the CPU, that should free up some capacity for other things...

If we can offload visual occlusion checks, sqrt calcs, etc. as well, it would free up even more CPU time.
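On the sqrt point: games have long dodged the cost of a true square root on the CPU with bit-level approximations; the famous fast inverse square root from Quake III is the classic example. A Python rendition of it, purely for illustration (the original is C, and on modern hardware a plain sqrt is usually fine):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    # Reinterpret the float's bits as a 32-bit integer...
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # ...halve the exponent via a shift and subtract from the
    # magic constant to get a rough first guess...
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # ...then one Newton-Raphson step sharpens the estimate.
    return y * (1.5 - 0.5 * x * y * y)
```

The result is only approximate (within a fraction of a percent), but it historically beat a divide plus a real sqrt, which is why games leaned on tricks like this before offloading to the GPU was an option.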

Until games make more extensive use of AI and start to throw around even more objects, I don't see CPU usage escalating beyond 4 cores any time soon, but this will change.
 
There is less and less game improvement from new CPUs. I think games need to catch up with CPUs, or we need better software that utilises all the cores. If you go from a Q9550/9650 to an i7, it's just a marginal difference in games.
 
What do you mean by this? I am buying a 2.66GHz i7 at the beginning of next month; should I wait until May for this 3.3GHz one?

I wouldn't bother waiting. The 3.3GHz part will be $999, the 950 (3.0GHz or so) will replace the 940, the 940 will be reduced in price to current 920 retails, and the 920 will be phased out.

But if you wait, you'll just end up waiting and waiting. Once tech hits a price you're prepared to pay, get it, and then wait for the next upgrade.
 