Big GPUs are set to die

Look at the majority of computational requirements now - most of them don't actually need a massive leap in serial performance, because the problem can be split into parallel pieces and the results delivered faster.

Big server farms are crying out for more performance per metre of floor space and for lower cooling/power requirements. All of these are addressed by putting more onto the silicon and shrinking each transistor.
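To put the parallel-versus-serial point in more concrete terms, here's a rough sketch in plain standard C++ (purely illustrative, nothing vendor-specific): instead of waiting on one faster core, split one big job across however many cores the machine reports and merge the partial results at the end.

[code]
// Purely illustrative: split one big job across all available cores
// instead of waiting on a single faster core. Standard C++ threads only.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 50000000;          // hypothetical workload size
    std::vector<double> data(n, 1.0);

    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 1;               // fall back to serial if unknown

    std::vector<double> partial(cores, 0.0); // one result slot per core
    std::vector<std::thread> workers;

    const std::size_t chunk = n / cores;
    for (unsigned i = 0; i < cores; ++i) {
        const std::size_t begin = i * chunk;
        const std::size_t end   = (i + 1 == cores) ? n : begin + chunk;
        // Each worker sums its own slice; no locking needed.
        workers.emplace_back([&, i, begin, end] {
            partial[i] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();

    const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "cores used: " << cores << ", sum: " << total << '\n';
}
[/code]

The same binary scales itself to however many cores the socket offers, which is exactly why more, cheaper cores beat one hotter, faster one for this kind of work.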

Well, look at AMD/ATI. Take their current CPU architecture, replace the Opteron & memory with a combined GPU/CPU and DDR4 for each socket, then attach each socket to the others over an HT bus.

You have the basics for:
a) Scalable parallel home PC market
b) Scalable parallel OEM/Console market
c) Scalable parallel Super computer market.


The design of the separate GPU is following the same pattern too, so it seems reasonable and logical that the R600 is the bridging release between the old separate GPU/CPU and GCPUs.
Look at both nVidia's and ATI's design moves: the components are becoming more componentised and less dependent on each other. That clearly allows a jump in performance - look at the G80's performance increase.
I would say that ATI's system may work radically differently, but both are ripe for parallel processing.
 
I also think it has to be the next logical step (smaller size/less power consumption/less heat); the laptop and small form factor markets spring to mind.

One day soon, maybe, our grandchildren will laugh at how we used massive metal boxes with fans, motherboards, slot-in cards etc for our computers... They will most probably be using ones the size of a Rubik's cube for their home PC and one the size of a die for their laptops (with projected holographic screens and keyboards)... Miniaturization has to be the way forwards because, in terms of energy and heat, it is more efficient...

I do think this shift we are seeing in the graphics-power-to-software-demand ratio (a single graphics card now seems able to meet the demands of the most intensive simulation or game) is very encouraging and good news for us. May the ATI and Nvidia battles be long and hard :)

Sorry to babble on so...But you must understand that it's 9.25 AM and I have still not had my bacon and eggs...

S!ap
 
Look at the majority of the market:
1) Games consoles - one or two GCPUs, non-upgradable
2) Desktop - one or two GCPUs
3) Workstation - a desktop system with additional empty GCPU sockets
4) Server - a larger workstation... servers/blades...
5) Supercomputer - a large set of servers/blades...
6) Mobile - a single GCPU

If the GCPU can save power by shutting down its unused cores/pipelines, that adds to the power consumption already lowered by combining the cores.

The chip is scalable, so you don't have to re-tool or run a separate production line.
 
LoadsaMoney said:
Yeah R600 is due around Vista time, and the refresh of G80 is also due around the same time. :)

Vista is out in Jan (R600 rumoured for Feb), so I seriously doubt NV will release the refresh of the G80 when it has only just been released. Sure, they'll probably release the mainstream versions, but that's not a refresh.
 
The refresh of the G80 was announced when the G80 was released; it's slated for Feb, around R600 time. I bet they already have it waiting in the wings for R600. :)
 
Yes, the article is speculation, but the chances are it's pretty damn close to what is going to happen.

Let's all take a trip down memory lane...

Remember a time in the CPU arena when MHz was king? Intel and AMD were duking it out for the highest clock speed: as AMD's pipeline was much shorter, they didn't need (and couldn't reach) higher clock frequencies to do the same amount of work. Intel, however, had to spin their P4s up to goodness knows how fast (I think the highest wound up at 3.8GHz) to do less work per clock. And the amount of heat the Prescotts produced in the process earned them nicknames such as Preshot, storage heaters, etc.

AMD then packs two of its CPUs into one piece of hardware and we all sing the merits of dual-core processing (which has the SMP crowd laughing harder than they have since way back when). The advent of dual-core CPUs has brought about a flurry of activity in the gaming arena, with Valve demoing multicore-enabled code, Alan Wake doing its business, etc. And we all live happily ever after, safe in the knowledge that we can do more.
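Valve's actual demo code obviously isn't reproduced here, but the general shape of 'multicore-enabled' game code is easy to sketch (all the names and timings below are invented purely for illustration): each frame, physics and AI run on their own threads, and the main thread renders once both have finished.

[code]
// Toy "multicore-enabled" game loop: per frame, physics and AI each run on
// their own thread, then the main thread "renders" once both have finished.
// Names, timings and printouts are all made up for illustration only.
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

void update_physics(int frame) {
    std::this_thread::sleep_for(std::chrono::milliseconds(5)); // pretend work
    std::cout << "physics done for frame " + std::to_string(frame) + "\n";
}

void update_ai(int frame) {
    std::this_thread::sleep_for(std::chrono::milliseconds(5)); // pretend work
    std::cout << "AI done for frame " + std::to_string(frame) + "\n";
}

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        std::thread physics(update_physics, frame); // can run on one core
        std::thread ai(update_ai, frame);           // can run on another
        physics.join();                             // wait for both systems
        ai.join();
        std::cout << "render frame " << frame << "\n"; // single render step
        // (Console output from the two worker threads may interleave.)
    }
}
[/code]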

For the past year or so, we have been sitting through a similar rendition of the CPU furnace soap opera in the field of GPUs: Nvidia kind of sat this one out, but not entirely, whilst ATi has taken centre stage (and doesn't look like giving up the spotlight) with GPUs that eat up vast quantities of power. Now we have the G80, which isn't exactly frugal (true, its performance makes up for it), and a rumoured R600 from ATi which actually requires [vicious rumour] a fusion power plant in the back garden for stability[/vr].

So, would anybody like to take a guess as to where the GPU industry might possibly go in the quest for more performance? I'll give you a hint. Or two:

1) Dare I say it: parallelism? Loadsa people have been playing with Crossfire/SLi without even realising what kind of stepping stone they were playing with. Guinea pigs, anybody? Nvidia's GX2 was awesome, yes, but also conclusive proof that they could get two cores to work happily together on one theoretical card (there's a rough sketch of the idea after this list). The Galaxy 7900GT Dual Core Masterpiece was another such demonstration of 'power'...

2) I said it in another thread: the slightly less well trumpeted fact about ATi's R300 chip (other than that it whupped its competition at the time) was that it could be connected to another R300 chip. That led to the emergence of some niche-market dual-GPU 9700 Pros built for specific requirements. And they were all dismissed as BS by almost everybody on these forums.
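To make "two cores working happily together on one card" a bit more concrete, here's a rough, purely illustrative sketch of the split-frame idea - two plain C++ threads stand in for the two GPUs (no actual SLI/CrossFire API is involved), each shading half the scanlines of the same frame.

[code]
// Illustrative "split-frame rendering": two workers each shade half the
// scanlines of one frame, roughly the way SFR divides a frame between two
// GPUs. Plain C++ threads stand in for the GPUs; no vendor API is used.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    const int width = 640, height = 480;
    std::vector<std::uint8_t> frame(static_cast<std::size_t>(width) * height);

    // Each "GPU" shades rows [rowBegin, rowEnd) of the shared frame buffer.
    auto shade = [&](int rowBegin, int rowEnd) {
        for (int y = rowBegin; y < rowEnd; ++y)
            for (int x = 0; x < width; ++x)
                frame[static_cast<std::size_t>(y) * width + x] =
                    static_cast<std::uint8_t>((x ^ y) & 0xFF); // toy "shader"
    };

    std::thread gpu0(shade, 0, height / 2);      // top half of the frame
    std::thread gpu1(shade, height / 2, height); // bottom half of the frame
    gpu0.join();
    gpu1.join();

    std::cout << "frame shaded by two workers: " << frame.size() << " pixels\n";
}
[/code]

Whether the two workers are two chips on one card, two cards in SLI, or two cores inside one GCPU, the division of labour is the same - which is rather the point.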

So, given these two bits of 'speculation', would anybody like to have another guess at why the Inquirer has postulated the advent of multiple, slower cores doing more work? Aside from laughing their heads off at being paid to be computer journalists, they are merely applying the "to know where we're going, we need to know where we've come from" philosophy.

And you know what? I agree with them. But then, I am entitled to my own opinion.
 