Well, here is something I only just found out about.
GPUs cannot downclock to full idle states when using more than 1 monitor.
I quote another forum on the subject:
If you recall our 5800 series article, we mentioned that AMD finally has the ability to change the clock speeds on GDDR5, using fast link retraining (FTR). In order to make FTR work without disrupting any images, FTR needs to be done during a v-blank period so that the monitor isn't reading from the front buffer, as the front buffer will be momentarily unavailable during the FTR. This is very easy to accomplish when you only have 1 monitor, because there's only 1 v-sync cycle to deal with.
The issue is that with multiple monitors, there's no guarantee that all of the monitors will be perfectly synchronized. It's possible for the monitors to be out of sync (particularly when using different display types, e.g. one DVI and one DP), which results in flickering on any monitors not in sync with the monitor the FTR was timed off of. This is the flickering you see when you have an overclocked card, as the card is accidentally switching GDDR5 speeds when it shouldn't be. [At the time, a card overclocked with CCC would not go into the correct 2-monitor PP idle state]
So the reason AMD keeps cards at a higher state when multiple monitors are attached is to prevent that flickering. This means at a minimum keeping the GDDR5 at whatever it defaults to (1000MHz/1200MHz). I'm not entirely sure why the GPU is kept at a higher state too, but my best guess is that there may be performance issues with trying to draw to 2 large monitors at such low clock speeds. Or it may be that this is just easier than creating another PowerPlay state.
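To put rough numbers on that v-blank argument, here's a toy simulation in Python. The 1080p timing figures and the 59.94 Hz second monitor are my own assumptions for illustration, not anything from the quoted article; it just estimates how often two unsynchronized monitors are actually blanking at the same time:

```python
# Toy model: two "60 Hz" monitors that aren't actually locked together.
# All timing numbers are assumptions (typical 1080p CEA timing), not
# anything from the quoted article.
FRAME_A = 1 / 60.000   # monitor A: exactly 60 Hz (~16.667 ms per frame)
FRAME_B = 1 / 59.940   # monitor B: nominally 60 Hz, really 59.94 Hz
VBLANK  = 0.00067      # ~0.67 ms of v-blank (45 of 1125 lines at 1080p60)
STEP    = 1e-5         # sample the timeline every 10 us
SECONDS = 20           # a bit more than one full beat period (~16.7 s)

def in_vblank(t, period):
    # v-blank occupies the tail end of each refresh period
    return (t % period) > (period - VBLANK)

shared = longest = run = 0.0
for i in range(int(SECONDS / STEP)):
    t = i * STEP
    if in_vblank(t, FRAME_A) and in_vblank(t, FRAME_B):
        shared += STEP
        run += STEP
        longest = max(longest, run)
    else:
        run = 0.0

print(f"both monitors blanking at once: {shared / SECONDS:.2%} of the time")
print(f"longest shared v-blank window:  {longest * 1000:.3f} ms")
```

On those assumed numbers, each screen is only blanking about 4% of the time, and the two windows drift past each other on a roughly 17-second beat, so a shared window long enough to retrain in only comes around briefly and unpredictably. That is why timing the reclock off one monitor flickers the other.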
This affects nVidia cards too, at least if you use two different connection types, e.g. one monitor on DVI and one on DisplayPort.
I had my new 680 connected up via DP and DVI. It was more convenient at the time, as my chassis is a little misaligned and makes plugging into one of the DVI ports a pain.
But after looking at other people's idle core frequencies and idle temps, I was baffled: mine sat at 50°C idle with a core frequency of 600MHz.
A little research later, I plugged both monitors into DVI: 35°C idle and a 300MHz idle frequency.
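If you want to check your own card's idle behaviour, something like the sketch below works. It assumes nvidia-smi (which ships with the NVIDIA driver) is on the PATH and a driver recent enough to support these query fields; older drivers may not have them.

```python
# Query current graphics/memory clocks, temperature and performance state.
# Assumes nvidia-smi is on the PATH; field names are from recent drivers.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=clocks.gr,clocks.mem,temperature.gpu,pstate",
     "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
# A fully idle card should report its deepest idle P-state (e.g. P8) and
# low clocks; a mixed-connector multi-monitor setup may sit a state higher.
```

The pstate column is the quickest tell: a single-monitor desktop should drop to the deepest idle P-state, while a mixed-connector setup tends to hold a higher one.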
I NEVER knew about this and only stumbled upon it by accident.
I am unsure whether AMD cards/drivers will downclock further when both monitors use the same connection type, but nVidia seems to have sorted this out at some point.
So, is this common knowledge among multi-monitor users? It was news to me.
Also, considering the huge push towards multi-monitor support, you'd think a solution might be fashioned?
If you run a 680 for Surround you have no choice but to mix monitor connectivity, so you're stuck.