As title really: using the GPU F@H client, it seems the GPU is hugely powerful, and I also hear something about CS4 utilizing GPU power to help along. So how come GPU technology can't be integrated into CPUs to make them quicker?
Most of the people in GD can't decide on what hairstyle or socks to go for, do you think this is the best place to discuss such a thing?!
GPUs are built for very specialised calculations, very useful for graphics and some maths, but woeful at the stuff your CPU is good at.
Just like your CPU is crap at graphics.
It's like a tank and an F1 car: one will outdo the other on a track, but fail epically off-road.
Although they are working on GPGPU stuff in DX11, so it will be able to help with certain tasks.
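To put the tank/F1 point in code, here is a minimal CUDA sketch (purely illustrative, nothing from DX11; both function names are invented for this example). A loop where every step depends on the previous one belongs on the CPU, while an operation where every element is independent is exactly what a GPU is built for.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// CPU territory: every iteration depends on the previous one,
// so the chain cannot be spread across thousands of threads.
// (collatz_steps is a made-up example for this post.)
long collatz_steps(long n) {
    long steps = 0;
    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        ++steps;
    }
    return steps;
}

// GPU territory: every element is independent, so thousands of
// threads can each handle one element at the same time.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                        // about a million floats
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));          // demo data only
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // 4096 blocks of 256 threads
    cudaDeviceSynchronize();
    cudaFree(d);
    printf("collatz_steps(27) = %ld\n", collatz_steps(27));
    return 0;
}
```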
Lol, I think I am going for a spiky do tomorrow with a pair of white socks with red trim
Hmmm, true, it would be interesting if they could use the GPU for those kinds of things alongside the CPU, like sharing the load - it would make a serious difference in computing power.
Heh, hard question, and not possible to answer non-technically. GPUs are built for floating-point operations (used a lot in graphics, as tefal says). The reason they come in handy is that the CPU's ALU deals with integer arithmetic, so the floating-point operations can be offloaded to the FPU/GPU.
CPUs deal with floating point arithmetic too. The nature of the data is not the issue. The issue is that GPUs are designed for parallel operation, and CPUs are designed for highly flexible serial operations.
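To make that serial-versus-parallel distinction concrete, here is a hedged sketch in CUDA C (my own illustration, not from anyone in the thread): the same vector addition written once as a CPU loop and once as a GPU kernel, where the loop disappears and each of thousands of threads computes a single element.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Serial version: one CPU core walks the array element by element.
void add_serial(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
}

// Parallel version: the loop is gone; each GPU thread computes
// exactly one element, and thousands of them run at once.
__global__ void add_parallel(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host data, prepared by the CPU.
    float *a = new float[n], *b = new float[n], *c = new float[n];
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // The GPU has its own memory: copy in, launch, copy out.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

    add_parallel<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] a; delete[] b; delete[] c;
    return 0;
}
```

Notice the explicit copies to and from the card: the GPU really is driven by the CPU as a separate device with its own memory, and that transfer cost is part of why tiny jobs aren't worth offloading.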
The term you're kinda looking for is co-processor, anyhow.
To make an analogy: think of the CPU as a scalpel, and the GPU as a sledgehammer. You can break down a wall with a scalpel, but it will take a long time. However, you can never do brain surgery with a sledgehammer.
Wow. I knew the answer but that still made it sound better in my head!

A CPU is, at heart, a serial device.
A GPU is a massively parallel device.
A CPU is far more flexible than a GPU, but when a program CAN be adapted to work with a GPU (i.e. when it can be broken down into a massive number of parallel threads with relatively little communication between the threads), then the GPU will be many times faster. Relatively few applications lend themselves to such multi-threading, unfortunately (there's a sketch of why below).
A CPU gives over most of its transistors to control logic and caching, whereas a GPU uses almost all of its logic for raw calculation:
[Diagram: relative transistor budget of a CPU vs a GPU - the CPU die dominated by control logic and cache, the GPU die by ALUs]
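That "relatively little communication between the threads" caveat is the crux, so here is a rough sketch of the awkward case (again illustrative CUDA, with block_sum being an invented name): summing an array is still parallelisable, but the threads must share partial results, so every round needs a barrier.

```cpp
#include <cuda_runtime.h>

// Summing an array can be parallelised, but unlike a pure
// element-wise job the threads must talk to each other: each
// round halves the number of active threads, and a barrier is
// needed so partial sums are visible before the next round.
// (block_sum is a made-up name; blockDim.x must be a power of two.)
__global__ void block_sum(const float *in, float *out, int n) {
    extern __shared__ float buf[];   // per-block scratch memory
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    buf[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                 // communication point

    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride) buf[tid] += buf[tid + stride];
        __syncthreads();             // another barrier every round
    }
    if (tid == 0) out[blockIdx.x] = buf[0];  // one partial sum per block
}

int main() {
    const int n = 1 << 20, threads = 256;
    const int blocks = (n + threads - 1) / threads;
    float *din, *dout;
    cudaMalloc(&din,  n * sizeof(float));
    cudaMalloc(&dout, blocks * sizeof(float));
    cudaMemset(din, 0, n * sizeof(float));   // demo data only
    // Third launch parameter = shared memory bytes per block.
    block_sum<<<blocks, threads, threads * sizeof(float)>>>(din, dout, n);
    cudaDeviceSynchronize();
    cudaFree(din);
    cudaFree(dout);
    return 0;
}
```

Even then, each barrier only synchronises threads within one block; producing a single grand total needs a second pass (or the CPU) to add up the per-block results, which is exactly the control-heavy glue the CPU's transistor budget is spent on.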
GPUs are dedicated to just one thing, whereas a CPU can be used for many different tasks, from running games to encoding films.