
Is parallel computing detracting from gaming performance?

I have just been wondering if APP is hurting the performance of graphics cards. We are now seeing graphics cards become more and more optimized towards computational tasks as opposed to rendering, and I am wondering whether these modifications are entirely conducive to gaming performance.

I am also wondering if it might not be better (for the consumer) to branch these technologies off into their own products. Would we see a gain in gaming if the APP side of things was ignored?
 
As far as performance goes it doesn't really have much impact. However, these features often require some additional hardware, which pushes up cost, power and heat a little over what is needed purely for gaming.

There may be some instances where operations are queued for batch processing, which is a performance advantage for compute but carries a latency penalty for gaming use, though that is probably minimal. For the most part the extra hardware for compute is bypassed for gaming and neither helps nor hinders, and there isn't enough of it that you could say removing it to add extra SPs, for instance, would improve gaming performance in a noticeable way.
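To make that batching trade-off concrete, here is a minimal CUDA sketch (not from the thread; the kernel, sizes and batch count are all invented for illustration): a compute job can queue a pile of work and synchronise once, whereas a game needs its results every frame and pays the synchronisation latency each time.

#include <cstdio>
#include <cuda_runtime.h>

// Stand-in for real work: a trivial per-element update.
__global__ void step(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 1.0001f + 0.5f;
}

int main() {
    const int N = 1 << 20, BATCH = 64;          // illustrative numbers only
    float* d;
    cudaMalloc(&d, N * sizeof(float));
    cudaMemset(d, 0, N * sizeof(float));

    // Compute style: queue a whole batch, synchronise once -> best throughput.
    for (int k = 0; k < BATCH; ++k)
        step<<<(N + 255) / 256, 256>>>(d, N);
    cudaDeviceSynchronize();

    // Game style: the frame needs the result now, so you sync every launch.
    for (int frame = 0; frame < BATCH; ++frame) {
        step<<<(N + 255) / 256, 256>>>(d, N);
        cudaDeviceSynchronize();                // frame can't present until this returns
    }

    cudaFree(d);
    printf("done\n");
    return 0;
}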

On the flip side, developers are increasingly looking to leverage compute to make advances in things like in-game AI, advanced geometry and texture functionality, all of which benefit from compute-oriented features. While that might impact rendering performance unless offloaded to a dedicated card, it would enrich the gaming experience.
 
Hmmm. Do you think that having a dual market base could have driven up prices somewhat? Especially with AMD cards, which currently seem to excel at the likes of Bitcoin and the *@home projects.

I have actually wondered why AMD have not released stripped-down versions of the x2 cards for professional use, mainly with much less VRAM, because from what I can see most (maybe all?) APP tasks don't seem to benefit from large amounts of VRAM or full PCI-E bandwidth; I have seen people running these cards in modified PCI-E x1 slots.
 
...However, these features often require some additional hardware, which pushes up cost, power and heat a little over what is needed purely for gaming.

This is the long and short of it.

Compute applications require more flexibility from the GPU - the ability to schedule threads more efficiently, more interconnects between compute units, more cache etc. All these things "cost" transistors.

So, comparing a 'compute-based' architecture against an architecture designed purely for graphics applications: For equivalent performance the compute architecture will require more transistors, and therefore a larger die-area and power draw. So from this perspective - yes, compute architectures are "detracting from gaming performance".
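As a rough illustration of the kind of flexibility being paid for (my own sketch, with arbitrary sizes, not anything from the post): a block-wide sum needs on-chip shared memory and barriers so threads can exchange data and wait for each other, which is exactly the sort of scheduling and interconnect machinery a pure pixel pipeline could largely do without.

#include <cstdio>
#include <cuda_runtime.h>

// Block-wide sum using shared memory and barriers: on-chip data sharing and
// cross-thread coordination of the kind compute workloads lean on.
__global__ void blockSum(const float* in, float* out, int n) {
    __shared__ float s[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    s[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                                  // wait for every thread's load
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            s[threadIdx.x] += s[threadIdx.x + stride];
        __syncthreads();                              // barrier between reduction steps
    }
    if (threadIdx.x == 0) out[blockIdx.x] = s[0];     // one partial sum per block
}

int main() {
    const int N = 1 << 20, BLOCKS = (N + 255) / 256;
    float *d_in, *d_out;
    cudaMalloc(&d_in,  N * sizeof(float));
    cudaMalloc(&d_out, BLOCKS * sizeof(float));
    cudaMemset(d_in, 0, N * sizeof(float));
    blockSum<<<BLOCKS, 256>>>(d_in, d_out, N);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}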


There is a flip-side though:

Compute architectures are much more effective for handling more complex operations, such as tessellation or physics processing. Right now these have very little impact in games, since developers must put most of their development effort into features that everyone can use. For as long as the consoles lack these advanced compute features, we won't see them used in anything more than a superficial way.

...But in principle, there are amazing things that can be done with tessellation and other advanced GPU features. For example, a displacement map can be stored like a texture and used to deform objects dynamically: you could shoot a metal plate and the bullets would leave real dents in it. Or you could procedurally generate trees by extruding branches. As for hardware physics, if we had a universal API (i.e. not PhysX) then there is the possibility of massive-scale destruction using the GPU to perform the calculations.
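A minimal sketch of the dent idea in CUDA terms (everything here, from the map size to the falloff, is made up for illustration): a compute kernel stamps a depression into a heightfield-style displacement map at the impact point, which the tessellator could later read to push the actual vertices in, so the damage is real geometry rather than a decal.

#include <cmath>
#include <cuda_runtime.h>

// Stamp a dent into a heightfield "displacement map" at a bullet impact point.
__global__ void stampDent(float* height, int w, int h,
                          float hitX, float hitY, float radius, float depth) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    float dx = x - hitX, dy = y - hitY;
    float d2 = dx * dx + dy * dy;
    if (d2 < radius * radius) {
        // smooth falloff so the dent blends into the surrounding surface
        float fall = 1.0f - sqrtf(d2) / radius;
        height[y * w + x] -= depth * fall * fall;
    }
}

int main() {
    const int W = 512, H = 512;                  // illustrative map size
    float* d_map;
    cudaMalloc(&d_map, W * H * sizeof(float));
    cudaMemset(d_map, 0, W * H * sizeof(float));
    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    stampDent<<<grid, block>>>(d_map, W, H, 256.0f, 256.0f, 24.0f, 0.1f);
    cudaDeviceSynchronize();
    cudaFree(d_map);
    return 0;
}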

So, in a way, the transition to a compute-based architecture is just the natural evolution of the GPU. Until we see games that fully utilise these features they will be something of a "waste", from the point of view of pixel-shading power. But when the next-gen consoles are released, complete with compute-capable architectures, we will start to see the new features used in a meaningful and creative way.

So the short answer is: Right now - yes. But eventually we should see them used to implement advanced gameplay features.
 
Meh, the main reason graphics cards are being used for GPGPU work is that graphics cards were already brilliant at parallel computing. People keep going on about GPGPU taking the focus off gaming, but ultimately, in increasing DP performance, Nvidia and AMD have also been increasing SP performance and gaming performance.

GPUs have been used for this kind of work for YEARS, well before either brand's cards were marketed as GPGPUs, and while some transistors are used for those functions, it's not quite as many as you think and a lot of the bits have multiple uses.

Ultimately DP performance can't increase drastically without more shaders, and more shaders means more gaming performance.

As for stripped-down versions, there isn't much point: memory isn't the biggest cost of the card, and selling cheaper cards doesn't help them make more profit. Then everyone who DOES want more memory is either stuck or you need yet another SKU; it's simply not worth the bother in that sense.

It's not like there's CPU at one end of the scale, GPU at the other, and GPGPU is right in the middle, with any move towards it hurting GPU performance. If that were the case they'd simply make something in the middle and make a killing off it. Intel is going for something somewhere in the middle; I'm sure it will be great for some very specific software, but there is a reason it hasn't been "epic" yet. Essentially GPGPU is at the same end of the scale as GPU.

GPUs have been multifunction for ruddy ages. Let's say they added a video decode block that took up 10% of the transistors five years ago; in more recent GPUs that has shrunk to barely 2% of the transistors, and GPGPU features now take up 8%... that still adds up to 10% of the transistors not being directly for gaming performance.

That is WAY oversimplified, but every generation the "non-gaming" features shrink, so adding more transistors for non-gaming functions doesn't necessarily take up a bigger percentage of the core than in any previous generation.
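Just plugging the poster's own toy percentages into some made-up transistor counts (none of these numbers describe any real GPU) shows the arithmetic:

#include <cstdio>

// Illustrative only: a decode block that was 10% of an old die becomes ~2% of
// a much larger die, and an 8% GPGPU addition brings the "non-gaming" share
// back to roughly 10%.
int main() {
    double old_total  = 700e6;                 // hypothetical old GPU
    double decode     = 0.10 * old_total;      // 10% of the old die
    double new_total  = 3000e6;                // hypothetical newer, bigger die
    double gpgpu      = 0.08 * new_total;      // compute-specific additions
    double non_gaming = (decode + gpgpu) / new_total;
    printf("decode share now: %.1f%%\n", 100.0 * decode / new_total);
    printf("non-gaming share: %.1f%%\n", 100.0 * non_gaming);
    return 0;
}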

You can bank on almost every generation of GPUs having some new non-gaming-specific feature that takes up transistors but is ultimately both worth having and not killing gaming performance.

The biggest thing Nvidia and AMD will face in the next few years is increasing power usage, and deciding just where the sane limit is on using more power for gaming.
 
GPUs have been multifunction for ruddy ages. Let's say they added a video decode block that took up 10% of the transistors five years ago; in more recent GPUs that has shrunk to barely 2% of the transistors, and GPGPU features now take up 8%... that still adds up to 10% of the transistors not being directly for gaming performance.

That logic only applies to fixed-function units, which have a fixed performance requirement. For example, the hardware required to set up the instructions to decode video is fairly well fixed (i.e. it performs a fixed function, and does not need to "scale up" as the GPU grows).

In contrast, compute features are linked to the connectivity of the GPU, and so must scale with it. In the most basic terms: if you have twice as much data flow and twice as many processing units, then you need (at least) twice as much interconnect logic to schedule the data. Since compute features require improvements to data handling and connectivity, the number of transistors required will scale with the GPU's processing power.

The relative 'cost' of adding compute capabilities will not diminish as the GPU size grows.
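A toy model of that scaling argument (all constants invented purely to show the trend): the fixed-function block stays a constant size, while the compute scheduling/interconnect "glue" is assumed to grow in proportion to the number of compute units, so its share of the die never shrinks the way the fixed block's does.

#include <cstdio>

int main() {
    const double FIXED_BLOCK   = 50e6;   // e.g. a video decode unit (made up)
    const double PER_UNIT_CORE = 30e6;   // shaders, texture units, ... (made up)
    const double PER_UNIT_GLUE = 3e6;    // scheduling + interconnect per unit (made up)

    for (int units = 8; units <= 64; units *= 2) {
        double total = FIXED_BLOCK + units * (PER_UNIT_CORE + PER_UNIT_GLUE);
        printf("%2d units: fixed-function %4.1f%%, compute glue %4.1f%%\n",
               units,
               100.0 * FIXED_BLOCK / total,
               100.0 * units * PER_UNIT_GLUE / total);
    }
    return 0;
}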
 
GPUs have been used for this kind of work for YEARS, well before either brand's cards were marketed as GPGPUs, and while some transistors are used for those functions, it's not quite as many as you think and a lot of the bits have multiple uses.

Hmm, well I know my humble 3870 doesn't support most, perhaps all, compute applications like Bitcoin, the *@home projects, etc. The 4xxx series was, I believe, the first from AMD. As for Nvidia, I do recall them knocking out dedicated boxes for parallel computing, I think with the 8 series. Was there anything before this?
 
Yes. However, it does open up interesting possibilities. As they are no longer just graphics processors, it will be interesting to see game designers come up with cool new games that leverage the massive parallelism made available. The simplest step will be increasingly realistic physics running on both CPU and GPU. PhysX is a step in this direction but is confined to Nvidia; it will be good to see open physics standards running on both vendors' GPUs as OpenCL matures, or perhaps running on AMD GPUs through CUDA thanks to efforts such as GPUOcelot.
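As a rough idea of what "physics on the GPU" means at its simplest (a generic CUDA sketch of my own, not PhysX or any real physics library; the Particle struct and constants are invented): one thread per particle integrating motion and bouncing off a ground plane.

#include <cuda_runtime.h>

struct Particle { float x, y, z, vx, vy, vz; };

// One thread per particle: apply gravity, integrate, crude ground collision.
__global__ void integrate(Particle* p, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vy -= 9.81f * dt;          // gravity
    p[i].x  += p[i].vx * dt;
    p[i].y  += p[i].vy * dt;
    p[i].z  += p[i].vz * dt;
    if (p[i].y < 0.0f) {            // bounce on the ground plane
        p[i].y  = 0.0f;
        p[i].vy = -0.5f * p[i].vy;  // lose half the energy on the bounce
    }
}

int main() {
    const int N = 1 << 16;
    Particle* d;
    cudaMalloc(&d, N * sizeof(Particle));
    cudaMemset(d, 0, N * sizeof(Particle));
    integrate<<<(N + 255) / 256, 256>>>(d, N, 1.0f / 60.0f);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}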

But the more interesting thing I'm looking forward to is increasingly realistic AI. Most games use relatively simplistic AI these days: finite state machines (FSMs) cover most types of AI seen in games, while there are some incredibly powerful and interesting techniques in the field that are not used at all because they require massively parallel processing. As GPGPU matures I think we'll start seeing more of this.
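Even the simple FSM case parallelises nicely. This is a hypothetical sketch (states, rules and agent counts are all invented): one GPU thread per agent, so a crowd of a hundred thousand agents can be stepped in one kernel launch.

#include <cuda_runtime.h>

enum State { PATROL = 0, CHASE = 1, FLEE = 2 };

struct Agent { int state; float health; float distToPlayer; };

// One thread per agent: evaluate a trivial finite state machine.
__global__ void stepAI(Agent* a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    Agent ag = a[i];
    if (ag.health < 0.2f)             ag.state = FLEE;
    else if (ag.distToPlayer < 10.0f) ag.state = CHASE;
    else                              ag.state = PATROL;
    a[i] = ag;
}

int main() {
    const int N = 100000;                      // a crowd, not a handful
    Agent* d;
    cudaMalloc(&d, N * sizeof(Agent));
    cudaMemset(d, 0, N * sizeof(Agent));
    stepAI<<<(N + 255) / 256, 256>>>(d, N);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}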

Such developments will enhance the quality of gameplay, and not just graphics.
 