They don't all use the same levels of GPU compute. "A compute-heavy game like Tomb Raider" means exactly that: it uses a lot of it. I don't know how you can see that as anything else; it stresses one aspect of the GPU more than usual (assuming the usual is not at such levels): highly compute-dependent lighting, shadows, physics, 'realistic hair rendering'.
...
A GPU is a compute device. Everything is compute... There are different types of data complexity, and different degrees of granularity, but it's all compute. That's what GPUs do, and that's what games require in order to display the graphics. This is what I'm trying to explain... It's not a complex concept.
Unless you want to go back 10-15 years, to a time when the ability to display textures was the limiting factor, it's all compute. The key is passing the data through to the compute units (the SPs) as efficiently as possible, which is a mixture of low-level hardware and software. It has been since the GeForce 2 era.
Saying one game or another is "particularly compute heavy" is nonsense. Tomb Raider may have a higher proportion of physics effects than some other games, which places different requirements on the design of a GPU and the drivers that control it (see my above post for details on the progression from "smooth" to "lumpy" data - and yes, those are technical terms). But it's all compute. That's. What. GPUs. Do.
That's why they're designed to perform as many computations (i.e. floating point operations; "FLOPS") as possible. And this is done by having a large number of processing units ("SPs") running, in parallel, at as high a clock speed as possible. Double the number of computations the device is capable of, and you double the potential performance. The challenge is maintaining efficiency through the pipeline as data throughput increases, and that is what "efficient" GPU design has meant generation after generation.
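To put some rough numbers on that, theoretical peak throughput is just SP count × clock × FLOPs per cycle per SP (2 for a fused multiply-add). A minimal sketch, using made-up illustrative spec values rather than any real card:

```python
def peak_gflops(shader_processors: int, clock_ghz: float,
                flops_per_cycle: int = 2) -> float:
    """Theoretical single-precision peak in GFLOPS.

    SPs x clock (GHz) x FLOPs per cycle per SP; 2 assumes one
    fused multiply-add (FMA) issued per SP per cycle.
    """
    return shader_processors * clock_ghz * flops_per_cycle

# Hypothetical GPU: 1024 SPs at 1.0 GHz.
base = peak_gflops(1024, 1.0)      # 2048.0 GFLOPS
# Doubling the SP count doubles the theoretical peak:
doubled = peak_gflops(2048, 1.0)   # 4096.0 GFLOPS
print(base, doubled, doubled / base)
```

Of course that's the "double the computations, double the potential performance" part; actually hitting the peak depends on feeding the SPs efficiently, which is the pipeline problem above.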